
Kubernetes Networking: NodePorts, LoadBalancers, Ingresses, and More

Kubernetes networking can be a daunting subject, especially for beginners. Despite thorough readings of the official documentation, many developers, myself included, struggle to grasp the core concepts. It was only after watching several tutorials and experimenting hands-on that I developed a deeper understanding of Kubernetes networking.

In this blog, I will break down essential Kubernetes networking resources—NodePorts, LoadBalancers, Ingresses—and address frequently asked questions such as:

  • How does a Kubernetes networking resource solve the Service Discovery problem?
  • Does the LoadBalancer service really provision a load balancer automatically?
  • How do production-ready Kubernetes clusters expose their applications?
  • What’s the difference between an Ingress and an Ingress controller?

Let’s dive in.

Diagram of Kubernetes networking with NodePorts, LoadBalancers, and Ingress.

Understanding Kubernetes Networking Resources

Before we dive deeper, let’s assume you are already familiar with Kubernetes YAML resource definitions. Additionally, for this discussion, imagine you have deployed a Kubernetes cluster across three VMs with the following IPs:

  • Node A: 192.168.0.1
  • Node B: 192.168.0.2
  • Node C: 192.168.0.3

Your cluster runs a microservices-based application with services deployed across the nodes. The goal is to understand how these services communicate both internally and externally.

Solving the Service Discovery Problem in Kubernetes

In Kubernetes, Pod IPs are dynamic and can change whenever a pod restarts. This causes potential communication failures between services if IP addresses are hardcoded. To solve this, Kubernetes offers a Service resource, which provides a stable network identity (a static virtual IP address and DNS name) for a set of pods selected by labels.

The Service resource ensures that even if a pod restarts and its IP changes, the Service’s network identity remains constant. For example, a Service lets the Products service keep calling the Reviews service without worrying about IP changes.

Types of Kubernetes Services

Kubernetes offers different types of Services to handle internal and external communications. Let’s take a look at the most common ones.

ClusterIP Service: Internal Communication

The ClusterIP service is the default type and is used for internal communication within a Kubernetes cluster. It exposes the service on a stable, cluster-internal virtual IP that can be reached by other pods in the same namespace or across namespaces, but not from outside the cluster.

Here’s an example YAML configuration for a ClusterIP service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: reviews
  name: reviews
spec:
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: 80
  selector:
    app: reviews
  type: ClusterIP

Using this service, pods within the same namespace can communicate via DNS resolution, like so:

http://reviews:5000

For cross-namespace communication, use the fully qualified domain name (FQDN), which follows the pattern <service>.<namespace>.svc.cluster.local. Assuming the service lives in a namespace also called reviews:

http://reviews.reviews.svc.cluster.local:5000
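As a quick sanity check, you can exercise both names from a throwaway pod inside the cluster. This is a sketch: the curlimages/curl image and the reviews namespace are assumptions, and the pod must run in the reviews namespace for the short name to resolve.

```shell
# Launch a temporary pod with curl and an interactive shell;
# --rm deletes it when the shell exits.
kubectl run dns-test --rm -it --restart=Never \
  --image=curlimages/curl -n reviews -- sh

# Inside the pod, the short name works from the same namespace:
curl http://reviews:5000/

# The FQDN works from any namespace:
curl http://reviews.reviews.svc.cluster.local:5000/
```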

NodePort Service: Exposing Services Externally

If you want to expose a service to the external network, Kubernetes provides the NodePort service. This type opens the same port on every node in the cluster and forwards traffic arriving on that port to the service. Unless you specify one, Kubernetes automatically assigns a port from the range 30000–32767.

Here’s an example YAML for a NodePort service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: reviews
  name: reviews
spec:
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: 80
  selector:
    app: reviews
  type: NodePort

Once this service is created, you can access your application by hitting any node’s IP address and the assigned port, for example:

192.168.0.1:30519
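The assigned port is not predictable in advance, so you typically look it up after creating the service. A minimal sketch, assuming the service above is named reviews:

```shell
# The PORT(S) column reads e.g. 5000:30519/TCP,
# where 30519 is the node port Kubernetes picked.
kubectl get service reviews

# Or extract just the node port with jsonpath:
kubectl get service reviews -o jsonpath='{.spec.ports[0].nodePort}'
```

If you need a stable, predictable port, you can also set nodePort explicitly under the port entry in the YAML, as long as the value falls within the allowed range.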

However, using raw IP addresses and ports is not user-friendly. This is where a LoadBalancer service comes in handy.

LoadBalancer Service: Automatic Load Balancing

The LoadBalancer service type provisions an external load balancer, making it ideal for cloud environments like AWS, GCP, and Azure. The LoadBalancer automatically handles traffic distribution between nodes, making it easier to scale applications.

Here’s an example YAML configuration for a LoadBalancer service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: reviews
  name: reviews
spec:
  ports:
    - name: http
      port: 5000
      protocol: TCP
      targetPort: 80
  selector:
    app: reviews
  type: LoadBalancer

Once you apply this YAML, Kubernetes automatically provisions a cloud load balancer, which will handle incoming traffic and distribute it across your nodes.
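Provisioning the cloud load balancer takes a moment, so it is worth watching the service until the external address appears. A sketch, again assuming the service is named reviews:

```shell
# EXTERNAL-IP shows <pending> until the cloud controller
# finishes provisioning the load balancer.
kubectl get service reviews --watch

# Once an address (or, on AWS, a hostname) appears,
# the app is reachable at http://<EXTERNAL-IP>:5000
```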

However, note that LoadBalancer services only provision an external load balancer in environments with an integrated cloud controller. If you’re running Kubernetes locally or on-premises, the service’s EXTERNAL-IP will remain stuck in the <pending> state, and the service will effectively behave like a NodePort service.

Ingress and Ingress Controllers: Advanced Routing

When managing multiple services, especially in production environments, using a LoadBalancer or NodePort for each service can become cumbersome and inefficient. Kubernetes introduces Ingress and Ingress controllers to solve this problem.

An Ingress is a Kubernetes resource that defines rules for routing external HTTP and HTTPS traffic to services inside the cluster, based on parameters like hostnames and paths. An Ingress controller, by contrast, is not a resource but a component you deploy in the cluster—typically a reverse proxy such as NGINX, Traefik, or HAProxy—that watches Ingress resources and implements their rules. Without a running Ingress controller, an Ingress resource has no effect.

Here’s an example of an Ingress resource for routing traffic:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  ingressClassName: nginx
  rules:
    - host: "products.myapp.com"
      http:
        paths:
          - pathType: Prefix
            path: "/products"
            backend:
              service:
                name: products-app
                port:
                  number: 80
    - host: "ratings.foo.com"
      http:
        paths:
          - pathType: Prefix
            path: "/ratings"
            backend:
              service:
                name: ratings-app
                port:
                  number: 80

With the above configuration, requests to products.myapp.com/products are forwarded to the products-app service, and requests to ratings.foo.com/ratings are routed to the ratings-app service.
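Before public DNS for these hostnames exists, you can test the rules by pointing curl at the ingress controller’s address and overriding the Host header. A sketch, assuming the NGINX ingress controller is reachable on Node A (192.168.0.1) at port 80:

```shell
# Matches the first rule and routes to products-app:
curl --header "Host: products.myapp.com" http://192.168.0.1/products

# Matches the second rule and routes to ratings-app:
curl --header "Host: ratings.foo.com" http://192.168.0.1/ratings
```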

Choosing Between NodePort and LoadBalancer Services

Choosing between NodePort and LoadBalancer services depends on your environment. For local development or on-premises Kubernetes, NodePort is a simple, flexible solution. For production environments, especially on cloud providers like AWS, GCP, or Azure, LoadBalancer is the preferred option because it automates the provisioning of a cloud load balancer.

At the same time, Kubernetes allows you to implement custom solutions, such as using Ingress controllers for more advanced routing, especially when handling multiple applications.

Conclusion: Kubernetes Networking Made Simple

In summary, Kubernetes provides several resources for managing networking:

  • ClusterIP: Used for internal communication and service discovery.
  • NodePort: Exposes applications externally, often used in development.
  • LoadBalancer: Exposes applications externally in production, with cloud integration.
  • Ingress: Handles complex routing and manages multiple services under a single entry point.

By understanding these concepts, you can effectively manage your Kubernetes services and expose your applications both internally and externally. Whether you’re developing on a local cluster or managing a production environment, Kubernetes provides a range of options to suit your needs.

Need Help with Kubernetes or DevOps?

At ZippyOPS, we offer expert consulting, implementation, and managed services to help you optimize your Kubernetes clusters and implement DevOps best practices. From Cloud and Microservices to DevSecOps, MLOps, and more, we ensure your infrastructure is both secure and scalable.

Explore our services, solutions, and products to learn more.

For more information or a demo, visit our YouTube channel. For inquiries, contact us at sales@zippyops.com.
