

I can (for instance) connect to the cluster compute nodes like this:

gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip

But if I try to set up my kubectl credentials like this:

gcloud container clusters get-credentials test-deploy --internal-ip

it complains:

ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy is not a private cluster.

I am able to run non-SSH commands like kubectl get pods --all-namespaces, but if I run

kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash

I get this error:

Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-xxxxxxx"

BTW, the whole point of this is to use Google Cloud NAT on my cluster so that I have a consistent external IP for all pods when connecting to an external service (Atlas) which uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to check them.

The master node and the worker nodes are in different networks: the master lives in a Google-managed VPC, while the worker nodes live in your VPC. With a standard cluster, the master communicates with the nodes via their external IPs. With a private cluster, the master and the worker nodes are connected by VPC network peering and communicate via internal IPs.
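To make the private-cluster setup concrete, here is a minimal sketch of creating one with gcloud. The cluster name matches yours, but the CIDR range is an illustrative placeholder; exact flags can vary by gcloud version.

```shell
# Sketch: create a private GKE cluster whose master and nodes
# communicate over internal IPs via VPC peering.
# The master CIDR below is a placeholder -- pick an unused /28.

gcloud container clusters create test-deploy \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr 172.16.0.32/28
```

--enable-private-nodes gives the worker nodes internal IPs only (which is what lets Cloud NAT handle their egress), and --enable-ip-alias makes the cluster VPC-native, which private clusters require.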

This causes problems when connecting to the master from other peered networks or over VPN connections, because peering is not transitive: the peering routes to the master are not propagated across a second VPN or peering hop.

For your use case, you could disable the external master endpoint. Once this is done, running the get-credentials command writes the internal master endpoint into your kube config instead of the external one. You would then need to run kubectl against the master from inside the VPC network (e.g. from a bastion host or through a proxy).
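As a sketch of that option (assuming a private cluster and a bastion VM in the same VPC; flag availability depends on your gcloud version):

```shell
# Sketch: restrict the master to its internal endpoint only.
gcloud container clusters update test-deploy \
  --enable-private-endpoint

# Then, from a bastion host inside the VPC, fetch credentials
# against the internal endpoint and use kubectl as usual:
gcloud container clusters get-credentials test-deploy --internal-ip
kubectl get pods --all-namespaces
```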

Instead, I recommend leaving the external endpoint active and running get-credentials without --internal-ip, so that your kube config uses the external endpoint and you can connect from anywhere. To keep your master secure, use Master Authorized Networks to define the external IPs or CIDR ranges you will be connecting from.
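The recommended setup can be sketched in two commands; the 203.0.113.0/24 range is a placeholder for whatever office/VPN CIDR you actually connect from:

```shell
# Sketch: keep the external master endpoint, but only allow
# kubectl access from known source ranges.
gcloud container clusters update test-deploy \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24

# Fetch credentials normally (no --internal-ip), so the kube
# config points at the external endpoint:
gcloud container clusters get-credentials test-deploy
```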

I am fairly certain the kubectl exec and logs commands are failing because of how you are getting the credentials.

One last thing worth checking: GKE automatically creates firewall rules and routes (named gke-...); these are required for the SSH tunnels from the master to the nodes to work properly.
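You can quickly check whether those GKE-managed rules are still present, for example:

```shell
# Sketch: list the firewall rules and routes GKE created for the
# cluster (their names all start with "gke-").
gcloud compute firewall-rules list --filter="name~^gke-"
gcloud compute routes list --filter="name~^gke-"
```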

    • You can still use Master Authorized Networks with private clusters and Cloud NAT. All your worker nodes will have no external IP, so all egress traffic will use Cloud NAT. The master will have an external endpoint, so you can run kubectl commands from outside your cluster.
    • Fair enough. I cannot finish this up right now to verify, so I'll go ahead and accept this as the answer.
    • This is a very helpful answer, but I'm concerned that using Master Authorized Networks might not solve my original problem: I need to force the cluster to use a single static IP on egress (as with a NAT), so that the whitelist on my third-party service (Atlas) remains valid as the cluster auto-scales up. It seems that the only way Cloud NAT will assume responsibility for the cluster members is if I prevent them from having external IPs.
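For the static-egress-IP part discussed above, a minimal Cloud NAT sketch looks like the following; the region, network, and resource names are placeholders:

```shell
# Sketch: give the cluster's egress a stable external IP via
# Cloud Router + Cloud NAT. Once the worker nodes have no
# external IPs of their own, their egress uses this address.

# Reserve a static external IP for NAT:
gcloud compute addresses create nat-egress-ip --region us-central1

# Create a Cloud Router in the cluster's VPC and region:
gcloud compute routers create nat-router \
  --network default --region us-central1

# Create the NAT config using the reserved IP:
gcloud compute routers nats create nat-config \
  --router nat-router --region us-central1 \
  --nat-external-ip-pool nat-egress-ip \
  --nat-all-subnet-ip-ranges
```

With this in place, nat-egress-ip is the single address to whitelist on the third-party side (Atlas), and it stays valid as the node pool auto-scales.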

This article is reproduced from Stack Exchange / Stack Overflow.