Why does the application go into CrashLoopBackOff when deployed on Kubernetes?

  docker, question

I have a Kubernetes cluster with 3 hosts: 1 master and 2 nodes.
The Kubernetes version is 1.7.
The application is deployed with a manifest like the following:

deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
  - port: 80
  selector:
    app: server
    tier: frontend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: frontend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always

192.168.33.13 is a private image registry built with Harbor.
The Harbor registry is reachable from the Kubernetes cluster.
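
For reference, registry reachability can be checked manually from a node, e.g. by hitting the Docker Registry v2 endpoint and doing a manual pull (plain HTTP on port 80 is an assumption based on the pull error in the events further down):

$ curl http://192.168.33.13/v2/
$ docker pull 192.168.33.13/myapp/server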

After running kubectl create -f deployment.yaml, the image was pulled and the pods started successfully in the k8s cluster at first. However, after the containers were restarted, they no longer work:

$ kubectl get pods
 NAME                                                         READY     STATUS             RESTARTS   AGE
 server-962161505-kw3jf                                       0/1       CrashLoopBackOff   6          9m
 server-962161505-lxcfb                                       0/1       CrashLoopBackOff   6          9m
 server-962161505-mbnkn                                       0/1       CrashLoopBackOff   6          9m
$ kubectl describe pod server-962161505-kw3jf
 Name:           server-962161505-kw3jf
 Namespace:      default
 Node:           node1/192.168.33.11
 Start Time:     Mon, 13 Nov 2017 17:45:47 +0900
 Labels:         app=server
                 pod-template-hash=962161505
                 tier=backend
 Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a", ...
 Status:         Running
 IP:             10.42.254.104
 Created By:     ReplicaSet/server-962161505
 Controlled By:  ReplicaSet/server-962161505
 Containers:
   server:
     Container ID:   docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
     Image:          192.168.33.13/myapp/server
     Image ID:       docker-pullable://192.168.33.13/myapp/server@sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
     Port:           3000/TCP
     State:          Waiting
       Reason:       CrashLoopBackOff
     Last State:     Terminated
       Reason:       Completed
       Exit Code:    0
       Started:      Tue, 14 Nov 2017 10:13:12 +0900
       Finished:     Tue, 14 Nov 2017 10:13:13 +0900
     Ready:          False
     Restart Count:  26
     Environment:    <none>
     Mounts:
       /var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
 Conditions:
   Type           Status
   Initialized    True
   Ready          False
   PodScheduled   True
 Volumes:
   default-token-csjqn:
     Type:        Secret (a volume populated by a Secret)
     SecretName:  default-token-csjqn
     Optional:    false
 QoS Class:       BestEffort
 Node-Selectors:  <none>
 Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                  node.alpha.kubernetes.io/unreachable:NoExecute for 300s
 Events:
   Type     Reason                 Age                 From            Message
   ----     ------                 ----                ----            -------
   Normal   SuccessfulMountVolume  22m                 kubelet, node1  MountVolume.SetUp succeeded for volume "default-token-csjqn"
   Normal   SandboxChanged         22m                 kubelet, node1  Pod sandbox changed, it will be killed and re-created.
   Warning  Failed                 20m (x3 over 21m)   kubelet, node1  Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
   Normal   BackOff                20m (x5 over 21m)   kubelet, node1  Back-off pulling image "192.168.33.13/myapp/server"
   Normal   Pulling                4m (x7 over 21m)    kubelet, node1  pulling image "192.168.33.13/myapp/server"
   Normal   Pulled                 4m (x4 over 20m)    kubelet, node1  Successfully pulled image "192.168.33.13/myapp/server"
   Normal   Created                4m (x4 over 20m)    kubelet, node1  Created container
   Normal   Started                4m (x4 over 20m)    kubelet, node1  Started container
   Warning  FailedSync             10s (x99 over 21m)  kubelet, node1  Error syncing pod
   Warning  BackOff                10s (x91 over 20m)  kubelet, node1  Back-off restarting failed container
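
The Last State above shows the container terminating with reason Completed and exit code 0 about one second after it starts, rather than crashing with an error. The output of the previously exited container can be inspected with, for example:

$ kubectl logs server-962161505-kw3jf --previous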

The same thing happens when the image is pushed to Docker Hub instead of Harbor.
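
A push to Docker Hub would look roughly like this (the repository name below is only a placeholder), with the image field in deployment.yaml changed accordingly:

$ docker tag 192.168.33.13/myapp/server <dockerhub-user>/server
$ docker push <dockerhub-user>/server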

How can this problem be solved?