Top 50 Kubernetes for .NET DevOps Questions & Answers

This post provides a comprehensive list of 50 frequently asked interview questions related to Kubernetes within the context of Azure, AWS, and .NET development, focusing on DevOps practices. It's designed to help candidates prepare for roles that require a strong understanding of container orchestration for .NET applications deployed on cloud platforms.

Section 1: Kubernetes Fundamentals

1. What is Kubernetes and why is it important for containerized .NET applications?

Answer: Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. For .NET applications, especially microservices, K8s provides a robust framework for:

  • Automated Deployment: Deploying and updating application instances.

  • Scaling: Automatically scaling applications up or down based on demand.

  • Self-healing: Restarting failed containers, replacing unhealthy ones.

  • Load Balancing: Distributing traffic across multiple instances.

  • Service Discovery: Allowing services to find each other easily.

  • Resource Management: Efficiently allocating CPU and memory.

2. Explain the core components of a Kubernetes cluster.

Answer: A Kubernetes cluster consists of:

  • Control Plane (Master Node):

    • kube-apiserver: Exposes the Kubernetes API.

    • etcd: Consistent and highly available key-value store for cluster data.

    • kube-scheduler: Watches for new Pods and assigns them to Nodes.

    • kube-controller-manager: Runs controller processes (Node Controller, Replication Controller, etc.).

    • cloud-controller-manager (optional): Integrates with cloud provider APIs.

  • Worker Nodes (historically called minions):

    • kubelet: An agent that runs on each node, ensuring containers are running in a Pod.

    • kube-proxy: Maintains network rules on nodes, enabling network communication to Pods.

    • Container Runtime: (e.g., Docker, containerd, CRI-O) responsible for running containers.

3. What is a Pod in Kubernetes?

Answer: A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in a cluster. A Pod contains one or more containers, which share the same network namespace, IP address, and storage volumes. For .NET, a Pod typically runs one .NET application container.

4. What is the purpose of a Deployment in Kubernetes?

Answer: A Deployment is a higher-level object that manages the desired state of your Pods. It describes how many replicas of a Pod should be running, how to update them (e.g., rolling updates), and how to roll back to a previous version. Deployments ensure that your application remains available and updated as specified.
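
A minimal Deployment sketch for a hypothetical .NET API (the name, labels, image, and port are illustrative placeholders, not from any specific project):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                  # hypothetical .NET microservice
spec:
  replicas: 3                       # desired number of Pods
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: myregistry.azurecr.io/orders-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080                         # port the app listens on

Applying this with kubectl apply -f deployment.yaml creates a ReplicaSet that keeps three Pods running; changing the image tag and re-applying triggers a rolling update.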

5. Explain the concept of a Service in Kubernetes.

Answer: A Service is an abstract way to expose an application running on a set of Pods as a network service. Services provide a stable IP address and DNS name for a group of Pods, even if the underlying Pods are created, deleted, or moved. This enables reliable communication between microservices and external access.

6. What are the different types of Kubernetes Services?

Answer:

  • ClusterIP (Default): Exposes the Service on a cluster-internal IP. Only reachable from within the cluster.

  • NodePort: Exposes the Service on a static port on each Node's IP. Makes the service accessible from outside the cluster via <NodeIP>:<NodePort>.

  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer (e.g., Azure Load Balancer, AWS ELB).

  • ExternalName: Maps the Service to the contents of the externalName field (e.g., a DNS name) by returning a CNAME record.
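
A sketch of a Service fronting the hypothetical Deployment above; ClusterIP is the default type, and switching type to LoadBalancer asks the cloud provider for an external load balancer:

apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  type: LoadBalancer        # omit (or use ClusterIP) for internal-only access
  selector:
    app: orders-api         # must match the Pod labels from the Deployment
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the .NET container listens on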

7. What is a Namespace in Kubernetes?

Answer: Namespaces are a way to divide cluster resources among multiple users or teams. They provide a scope for names (e.g., Pods, Services, Deployments within a namespace must have unique names) and are used for resource isolation and access control.

8. How does Kubernetes achieve self-healing?

Answer: Kubernetes achieves self-healing through various controllers:

  • Replication Controller/ReplicaSet: Ensures a specified number of Pod replicas are always running. If a Pod fails or is deleted, a replacement Pod is created automatically.

  • Liveness Probes: Periodically check if a container is running and healthy. If a probe fails, Kubernetes restarts the container.

  • Readiness Probes: Check if a container is ready to serve traffic. If a probe fails, Kubernetes stops sending traffic to that Pod until it's ready.

  • Node Controller: Detects and responds when nodes go down.

9. What is kubectl?

Answer: kubectl is the command-line tool for running commands against Kubernetes clusters. It allows you to deploy applications, inspect and manage cluster resources, and view logs.

10. What is a ConfigMap and why is it useful for .NET applications?

Answer: A ConfigMap is a Kubernetes object used to store non-sensitive configuration data as key-value pairs. For .NET applications, it's useful for:

  • Storing application settings (e.g., API URLs, feature flags) separate from the image.

  • Injecting configuration into Pods as environment variables or mounted files. This allows you to change configuration without rebuilding your Docker image.
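
A sketch of a ConfigMap for a hypothetical .NET service, loaded into a Pod as environment variables; the double underscore in the key names maps to the : hierarchy that IConfiguration uses in ASP.NET Core:

apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-api-config
data:
  PaymentsApi__BaseUrl: "https://payments.internal.example"   # read as PaymentsApi:BaseUrl
  FeatureFlags__NewCheckout: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-api-demo
spec:
  containers:
    - name: orders-api
      image: myregistry.azurecr.io/orders-api:1.0.0           # placeholder image
      envFrom:
        - configMapRef:
            name: orders-api-config                           # every key becomes an env var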

Section 2: Kubernetes with .NET Applications

11. How do you deploy a .NET Core web application to Kubernetes?

Answer:

  1. Containerize: Create a Dockerfile for your .NET Core app and build the Docker image.

  2. Push Image: Push the image to a container registry (e.g., Docker Hub, ACR, ECR).

  3. Kubernetes Manifests: Write YAML files for:

    • Deployment: To define your Pods (containing the .NET app image) and their replica count.

    • Service: To expose your .NET app to other services or external traffic.

  4. Apply Manifests: Use kubectl apply -f <your-deployment.yaml> and kubectl apply -f <your-service.yaml>.

12. How do you manage database connection strings for a .NET app in Kubernetes?

Answer:

  • Kubernetes Secrets: The recommended way for sensitive data. Store connection strings as Secrets and inject them into Pods as environment variables or mounted files.

  • ConfigMaps (for non-sensitive parts): Combine with Secrets for the full connection string.

  • Cloud-specific Secret Management: Integrate with Azure Key Vault (via CSI driver or Workload Identity) or AWS Secrets Manager (via IAM roles for service accounts/CSI driver).
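
A sketch of the Kubernetes Secrets approach with a placeholder connection string; stringData lets you write the value in plain text and Kubernetes stores it base64-encoded:

apiVersion: v1
kind: Secret
metadata:
  name: orders-db
type: Opaque
stringData:
  ConnectionStrings__OrdersDb: "Server=<server>;Database=Orders;User Id=<user>;Password=<password>"
---
# A Pod (or Deployment template) that injects it as an environment variable:
apiVersion: v1
kind: Pod
metadata:
  name: orders-api-demo
spec:
  containers:
    - name: orders-api
      image: myregistry.azurecr.io/orders-api:1.0.0
      env:
        - name: ConnectionStrings__OrdersDb       # surfaces as ConnectionStrings:OrdersDb in IConfiguration
          valueFrom:
            secretKeyRef:
              name: orders-db
              key: ConnectionStrings__OrdersDb

Remember that Secrets are only base64-encoded, so restrict access with RBAC and prefer the cloud secret stores above for production credentials.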

13. Explain how Kubernetes handles scaling for .NET microservices.

Answer: Kubernetes supports:

  • Horizontal Pod Autoscaler (HPA): Automatically scales the number of Pod replicas based on observed CPU utilization, memory usage, or custom metrics. This is crucial for .NET microservices to handle varying loads.

  • Vertical Pod Autoscaler (VPA): (installed as a separate add-on) Automatically adjusts CPU and memory requests/limits for containers in Pods based on historical usage.

  • Cluster Autoscaler: Automatically adjusts the number of nodes in your cluster based on pending Pods or underutilized nodes.
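
A sketch of an HPA (autoscaling/v2) targeting the hypothetical Deployment above and scaling on average CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale out when average CPU exceeds 70% of requests

CPU utilization is calculated against the Pod's CPU request, so the HPA only works if requests are set (see question 18).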

14. How would you ensure high availability for a .NET application in Kubernetes?

Answer:

  • Multiple Replicas: Configure Deployments with multiple Pod replicas.

  • Node Distribution: Kubernetes schedules Pods across different nodes.

  • Anti-Affinity Rules: Configure Pods to avoid being scheduled on the same node or availability zone.

  • Liveness and Readiness Probes: Ensure only healthy and ready Pods receive traffic.

  • Pod Disruption Budgets (PDBs): Ensure a minimum number of Pods are available during voluntary disruptions (e.g., node maintenance).

  • Managed Kubernetes Services: Use AKS or EKS which handle control plane high availability.
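
A sketch of a PodDisruptionBudget for the hypothetical orders-api Pods; anti-affinity or topology spread constraints would go in the Deployment's Pod template:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: orders-api-pdb
spec:
  minAvailable: 2            # keep at least 2 Pods up during node drains and upgrades
  selector:
    matchLabels:
      app: orders-api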

15. What are Liveness and Readiness Probes in Kubernetes, and how do they apply to .NET apps?

Answer:

  • Liveness Probe: Checks if the .NET application container is still running. If it fails, Kubernetes restarts the container. For .NET, this could be an HTTP GET to a /health/live endpoint.

  • Readiness Probe: Checks if the .NET application is ready to serve traffic. If it fails, Kubernetes removes the Pod from the Service's endpoints until it's ready. For .NET, this could be an HTTP GET to a /health/ready endpoint that also checks dependencies (database, external services).
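
A sketch of both probes on a hypothetical Pod, assuming the .NET app exposes /health/live and /health/ready (for example via ASP.NET Core health checks) on port 8080:

apiVersion: v1
kind: Pod
metadata:
  name: orders-api-demo
spec:
  containers:
    - name: orders-api
      image: myregistry.azurecr.io/orders-api:1.0.0
      livenessProbe:
        httpGet:
          path: /health/live        # should only check that the process is alive
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:
        httpGet:
          path: /health/ready       # may also check database/downstream dependencies
          port: 8080
        periodSeconds: 5
        failureThreshold: 3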

16. How do you perform rolling updates for a .NET application in Kubernetes?

Answer: Kubernetes Deployments natively support rolling updates. When you update the image tag or configuration in your Deployment manifest and apply it (kubectl apply -f deployment.yaml), Kubernetes:

  1. Creates new Pods with the updated image/config.

  2. Gradually brings up the new Pods.

  3. Gradually terminates the old Pods.

This ensures zero downtime during updates. You can configure the update strategy (e.g., maxUnavailable, maxSurge).

17. How would you debug a .NET application running in Kubernetes?

Answer:

  • kubectl logs <pod_name>: View standard output/error logs from your .NET app.

  • kubectl exec -it <pod_name> -- bash: Get an interactive shell inside the container to inspect files, run commands.

  • Port Forwarding: kubectl port-forward <pod_name> <local_port>:<container_port> to access the application or debugger port locally.

  • Remote Debugging: Configure your .NET app for remote debugging (e.g., vsdbg) and expose the debugger port via a Service or port-forwarding.

  • Ephemeral Containers (stable since Kubernetes 1.25): Temporarily attach a debugging container to a running Pod with kubectl debug.

18. What are resource requests and limits in Kubernetes for .NET Pods?

Answer:

  • Requests: The minimum amount of CPU and memory guaranteed for a Pod. Kubernetes uses requests for scheduling decisions. Your .NET app is guaranteed to get at least this much.

  • Limits: The maximum amount of CPU and memory a Pod can consume. If a Pod exceeds its memory limit, it will be terminated. If it exceeds its CPU limit, it will be throttled.

Importance for .NET: Setting appropriate requests and limits is crucial for performance, stability, and efficient resource utilization of your .NET applications in the cluster.
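
A sketch of requests and limits on a hypothetical container; note that since .NET Core 3.0 the runtime is container-aware, so the GC heap and thread pool are sized from the container's limits:

apiVersion: v1
kind: Pod
metadata:
  name: orders-api-demo
spec:
  containers:
    - name: orders-api
      image: myregistry.azurecr.io/orders-api:1.0.0
      resources:
        requests:
          cpu: 250m           # used for scheduling; guaranteed to the Pod
          memory: 256Mi
        limits:
          cpu: 500m           # CPU above this is throttled
          memory: 512Mi       # exceeding this gets the container OOM-killed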

19. How do you handle persistent data for stateful .NET applications (e.g., a local cache or search index) in Kubernetes?

Answer:

  • PersistentVolume (PV): A piece of storage in the cluster provisioned by an administrator or dynamically provisioned by a StorageClass.

  • PersistentVolumeClaim (PVC): A request for storage by a user (or Pod).

  • StatefulSets: Used to manage stateful applications. They ensure stable network identities, stable persistent storage, and ordered graceful deployment/scaling.

  • Managed Cloud Databases: For actual databases, use managed services like Azure SQL Database, AWS RDS, etc., outside the Kubernetes cluster.
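
A sketch of a PersistentVolumeClaim that a .NET Pod (or a StatefulSet's volumeClaimTemplates) could mount for a local cache or search index; the StorageClass name is cloud-specific and omitted here so the cluster default is used:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: search-index-data     # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce           # a single node can mount it read-write
  resources:
    requests:
      storage: 10Gi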

20. What is an Ingress in Kubernetes and how does it help expose .NET web apps?

Answer: An Ingress is a Kubernetes API object that manages external access to services within a cluster, typically HTTP/HTTPS. It provides:

  • External Access: Exposes your .NET web applications to the internet.

  • Load Balancing: Distributes incoming traffic across multiple Pods.

  • SSL/TLS Termination: Handles HTTPS encryption/decryption.

  • Name-based Virtual Hosting: Routes traffic to different services based on hostname.

  • Path-based Routing: Routes traffic based on URL paths.
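
A sketch of an Ingress routing a path to the hypothetical orders-api Service, assuming an NGINX Ingress Controller is installed and a TLS certificate is stored in a Secret (hostnames and Secret names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx              # assumes the NGINX Ingress Controller
  tls:
    - hosts:
        - shop.example.com             # placeholder hostname
      secretName: shop-example-tls     # TLS cert/key stored as a Kubernetes Secret
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /api/orders
            pathType: Prefix
            backend:
              service:
                name: orders-api
                port:
                  number: 80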

Section 3: Kubernetes with Azure (AKS)

21. What is Azure Kubernetes Service (AKS)?

Answer: AKS is a managed Kubernetes service offered by Microsoft Azure. Azure manages the Kubernetes control plane (master nodes), allowing users to focus on managing their application deployments and worker nodes. It simplifies the deployment, management, and scaling of containerized applications using Kubernetes.

22. What are the benefits of using AKS for .NET microservices?

Answer:

  • Managed Control Plane: Reduces operational overhead.

  • Integration with Azure Services: Seamless integration with Azure Container Registry (ACR), Azure Active Directory, Azure Key Vault, Azure Monitor, etc.

  • Scalability: Easy scaling of worker nodes and Pods.

  • Cost Optimization: Pay only for worker nodes, with options for spot instances and auto-scaling.

  • Security: Integration with Azure security features, network policies.

  • Hybrid Capabilities: Azure Arc for managing on-premises Kubernetes clusters.

23. How do you integrate Azure Container Registry (ACR) with AKS for .NET images?

Answer:

  1. Push Image: Push your .NET Docker images to ACR.

  2. Authentication: Grant AKS the necessary permissions to pull images from ACR. This is typically done by granting the cluster's managed identity (or service principal) the AcrPull role on the ACR, for example with az aks update --attach-acr <registry-name>.

  3. ImagePullSecrets (if needed): For older methods or specific scenarios, you might use imagePullSecrets in your Kubernetes manifests, but managed identity is preferred.

24. How do you manage secrets for .NET applications in AKS using Azure Key Vault?

Answer:

  1. Azure AD Workload Identity (Recommended): Configure Workload Identity in AKS to allow Pods to authenticate with Azure AD.

  2. Secrets Store CSI Driver: Install the Secrets Store CSI Driver with the Azure Key Vault provider. This allows you to mount Key Vault secrets as files within your Pods, described by a SecretProviderClass resource (see the sketch below).

  3. .NET Application: Your .NET application can then read these secrets from the mounted files (e.g., using IConfiguration in ASP.NET Core) or use the Azure SDK for .NET to directly access Key Vault with the Pod's identity.
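
A condensed sketch of the Key Vault integration; the parameter names follow the Azure provider for the Secrets Store CSI Driver, and the vault name, tenant, and client ID are placeholders to verify against your driver version:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: orders-api-keyvault
spec:
  provider: azure
  parameters:
    clientID: "<workload-identity-client-id>"   # identity federated with the Pod's service account
    keyvaultName: "orders-kv"                   # placeholder Key Vault name
    tenantId: "<azure-ad-tenant-id>"
    objects: |
      array:
        - |
          objectName: OrdersDbConnection
          objectType: secret
---
# Pod excerpt: the secret then appears as a file under /mnt/secrets-store/OrdersDbConnection
#   volumes:
#     - name: keyvault-secrets
#       csi:
#         driver: secrets-store.csi.k8s.io
#         readOnly: true
#         volumeAttributes:
#           secretProviderClass: orders-api-keyvault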

25. How do you expose a .NET web application in AKS to the internet?

Answer:

  • LoadBalancer Service: Create a Service of type LoadBalancer. Azure will provision an Azure Load Balancer that exposes your .NET app to the internet.

  • Ingress Controller: Deploy an Ingress Controller (e.g., Nginx Ingress Controller, Azure Application Gateway Ingress Controller - AGIC) and define Ingress resources. This provides more advanced routing, SSL termination, and WAF capabilities.

26. How would you monitor a .NET application deployed to AKS?

Answer:

  • Azure Monitor for Containers (Container Insights): Provides comprehensive monitoring for AKS clusters, including Pod health, resource utilization, and logs.

  • Application Insights (.NET): Integrate the Application Insights SDK into your .NET app for application-level telemetry, distributed tracing, and performance metrics.

  • Log Analytics Workspace: All logs from AKS (container logs, node logs, control plane logs) are sent here for centralized querying and analysis using KQL.

  • Prometheus & Grafana: Deploy these open-source tools within AKS for custom metrics collection and visualization.

27. What is Azure AD Workload Identity in AKS and why is it important for .NET apps?

Answer: Azure AD Workload Identity allows Kubernetes Pods to authenticate with Azure Active Directory and access Azure resources (like Key Vault, Storage Accounts) using Azure AD identities. Importance for .NET: It eliminates the need for managing service principal secrets in your Pods, providing a more secure and seamless way for your .NET applications to access Azure services with fine-grained permissions.

28. How do you handle persistent storage for .NET applications in AKS?

Answer:

  • Azure Disk: For single-Pod persistent storage (e.g., a single instance of a .NET app needing a local file store). Used with ReadWriteOnce access mode.

  • Azure Files: For shared persistent storage that can be accessed by multiple Pods simultaneously (ReadWriteMany access mode). Useful for shared configurations or content.

  • Azure NetApp Files: High-performance shared file storage, suitable for demanding workloads.

  • StorageClasses: Define different types of storage, and PVCs request storage from these classes.

29. How can Azure DevOps Pipelines be used for CI/CD of .NET Docker applications to AKS?

Answer:

  1. Build Stage: Use DotNetCoreCLI@2 to build/test .NET app, then Docker@2 to build and push the Docker image to ACR.

  2. Deploy Stage:

    • Use KubernetesManifest@1 task to deploy YAML manifests to AKS.

    • Integrate with Azure Key Vault for secrets.

    • Implement blue/green or canary deployments using Ingress or Service Mesh.
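
A condensed azure-pipelines.yml sketch of that flow; the service connection, environment, registry, and manifest paths are placeholders and would need to exist in your Azure DevOps project:

trigger:
  - main

variables:
  imageRepository: orders-api
  tag: $(Build.BuildId)

stages:
  - stage: Build
    jobs:
      - job: BuildAndPush
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: test
              projects: '**/*Tests.csproj'
          - task: Docker@2
            inputs:
              command: buildAndPush
              repository: $(imageRepository)
              containerRegistry: acr-service-connection     # placeholder Docker registry service connection
              tags: $(tag)

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToAks
        environment: aks-prod                               # placeholder environment with an AKS resource
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@1
                  inputs:
                    action: deploy
                    manifests: k8s/deployment.yaml          # assumed manifest path in the repo
                    containers: myregistry.azurecr.io/$(imageRepository):$(tag)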

30. What is the role of Azure Application Gateway Ingress Controller (AGIC) in AKS for .NET apps?

Answer: AGIC integrates Azure Application Gateway (a web traffic load balancer and WAF) directly with AKS. It allows you to use Application Gateway as the Ingress for your .NET web applications, providing:

  • Advanced Routing: URL-based routing, host-based routing.

  • SSL/TLS Termination: Centralized certificate management.

  • Web Application Firewall (WAF): Protection against common web vulnerabilities.

  • Centralized Management: Manage Ingress rules directly through Kubernetes Ingress resources.

Section 4: Kubernetes with AWS (EKS)

31. What is Amazon Elastic Kubernetes Service (EKS)?

Answer: EKS is a managed Kubernetes service offered by Amazon Web Services (AWS). AWS manages the Kubernetes control plane (master nodes), providing a highly available and scalable Kubernetes environment. Users deploy their containerized applications onto worker nodes (EC2 instances or Fargate) managed by EKS.

32. What are the benefits of using EKS for .NET microservices?

Answer:

  • Managed Control Plane: Reduces operational burden.

  • Integration with AWS Services: Deep integration with AWS IAM, VPC, ELB, CloudWatch, ECR, etc.

  • Scalability: Auto-scaling of worker nodes (EC2) and Pods (HPA).

  • Security: Leverages AWS IAM for strong authentication and authorization.

  • Flexibility: Allows full control over Kubernetes configurations and extensions.

33. How do you integrate Amazon Elastic Container Registry (ECR) with EKS for .NET images?

Answer:

  1. Push Image: Push your .NET Docker images to ECR.

  2. Node IAM Permissions: Image pulls are performed by the kubelet on the worker nodes, so the node IAM role (for managed node groups) needs ECR read permissions such as the AmazonEC2ContainerRegistryReadOnly managed policy; on Fargate, the Pod execution role needs the same access.

  3. Reference the Image: Use the full ECR image URI (e.g., <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>) in your Deployment manifest. No imagePullSecrets are required because authentication happens at the node level.

  4. IRSA for Application Access: Use IAM Roles for Service Accounts when the .NET application itself (or a CI job running in the cluster) needs to call ECR or other AWS APIs, not for routine image pulls.

34. How do you manage secrets for .NET applications in EKS using AWS Secrets Manager or Parameter Store?

Answer:

  • IAM Roles for Service Accounts (IRSA): Grant your Pod's Service Account permissions to read secrets from AWS Secrets Manager or parameters from AWS Systems Manager Parameter Store.

  • AWS SDK for .NET: Your .NET application uses the AWS SDK to programmatically retrieve secrets/parameters at runtime.

  • Secrets Store CSI Driver: Install the Secrets Store CSI Driver and configure a SecretProviderClass to mount secrets from Secrets Manager as files into your Pods. Your .NET app can then read these files.

35. How do you expose a .NET web application in EKS to the internet?

Answer:

  • LoadBalancer Service: Create a Service of type LoadBalancer. AWS will provision an Elastic Load Balancer (a Classic ELB by default, or an NLB when the AWS Load Balancer Controller manages the Service) that exposes your .NET app.

  • AWS Load Balancer Controller (Recommended): Deploy the AWS Load Balancer Controller. It provisions an Application Load Balancer (ALB) when you create an Ingress resource. This provides advanced HTTP/HTTPS routing, SSL termination, and WAF integration.

  • NodePort Service: Less common for production, but can be used for direct access via node IP.

36. How would you monitor a .NET application deployed to EKS?

Answer:

  • Amazon CloudWatch Container Insights: Provides detailed monitoring for EKS clusters, including Pod health, resource utilization, and logs.

  • AWS X-Ray: For distributed tracing of requests across your .NET microservices.

  • Prometheus & Grafana: Deploy these open-source tools within EKS for comprehensive metrics collection and visualization.

  • AWS SDK for .NET: Integrate application-level metrics and custom logs into CloudWatch.

37. What is IAM Roles for Service Accounts (IRSA) in EKS and why is it important for .NET apps?

Answer: IRSA allows you to associate an AWS IAM role with a Kubernetes Service Account. This enables Pods that use that Service Account to inherit the permissions of the IAM role, providing fine-grained access control to AWS resources. Importance for .NET: It provides a secure and granular way for your .NET applications running in Pods to access AWS services (e.g., S3, DynamoDB, SQS, Secrets Manager) without needing to store AWS credentials directly in the container or rely on node-level IAM roles.
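
A sketch of the wiring: a ServiceAccount annotated with a placeholder IAM role ARN, which the .NET Pods then reference so the AWS SDK for .NET picks up temporary credentials automatically:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders-api
  namespace: orders
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/orders-api-app   # placeholder role ARN
---
# Pod/Deployment template excerpt:
#   spec:
#     serviceAccountName: orders-api    # Pods using this account assume the role above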

38. How do you handle persistent storage for .NET applications in EKS?

Answer:

  • Amazon EBS Volumes: Can be provisioned dynamically via a StorageClass and attached to individual Pods (ReadWriteOnce access mode).

  • Amazon EFS: A shared file system that can be mounted by multiple Pods simultaneously (ReadWriteMany access mode). Useful for shared content or state.

  • Amazon FSx for Windows File Server: For Windows containers needing SMB shares.

  • StorageClasses: Define different types of storage, and PVCs request storage from these classes.

39. How can AWS CodePipeline and CodeBuild be used for CI/CD of .NET Docker applications to EKS?

Answer:

  1. Source Stage: CodeCommit, GitHub, etc.

  2. Build Stage (CodeBuild):

    • Builds .NET application (dotnet publish).

    • Builds Docker image.

    • Pushes image to ECR.

  3. Deploy Stage (typically another CodeBuild action running kubectl or Helm):

    • Updates Kubernetes Deployment manifest with the new image tag.

    • Applies the manifest to EKS using kubectl commands.

    • For advanced deployment strategies (e.g., blue/green, canary), add tooling such as Argo Rollouts or a service mesh; CodeDeploy itself targets EC2, Lambda, and ECS rather than EKS.
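
A condensed buildspec.yml sketch for the CodeBuild stage; the account ID, region, cluster name, and project paths are placeholders, and the CodeBuild role is assumed to have ECR push and EKS access:

version: 0.2

env:
  variables:
    ECR_REGISTRY: 123456789012.dkr.ecr.us-east-1.amazonaws.com   # placeholder account/region
    IMAGE_NAME: orders-api

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_REGISTRY
      - IMAGE_TAG=$CODEBUILD_RESOLVED_SOURCE_VERSION
  build:
    commands:
      - dotnet publish src/OrdersApi -c Release -o out             # placeholder project path
      - docker build -t $ECR_REGISTRY/$IMAGE_NAME:$IMAGE_TAG .
      - docker push $ECR_REGISTRY/$IMAGE_NAME:$IMAGE_TAG
  post_build:
    commands:
      - aws eks update-kubeconfig --name my-eks-cluster --region us-east-1   # placeholder cluster name
      - kubectl set image deployment/orders-api orders-api=$ECR_REGISTRY/$IMAGE_NAME:$IMAGE_TAG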

40. What is AWS Fargate for EKS and when would you use it for .NET apps?

Answer: AWS Fargate for EKS allows you to run Kubernetes Pods without provisioning or managing EC2 worker nodes. When to use for .NET:

  • For stateless .NET microservices where you want to completely abstract away server management.

  • When you want to pay only for the resources your Pods consume, getting a serverless experience for Kubernetes.

  • For workloads with unpredictable scaling needs, as Fargate scales compute capacity automatically.

Section 5: DevOps Best Practices with Kubernetes, Azure, and AWS

41. How does Kubernetes contribute to immutable infrastructure in DevOps?

Answer: Kubernetes enforces immutable infrastructure by managing containerized applications. Once a Docker image is built and deployed, it's not modified. Updates are performed by deploying new versions of images, and Kubernetes handles the rolling replacement of old Pods with new ones, ensuring consistency and predictability.

42. Explain the concept of GitOps in a Kubernetes context.

Answer: GitOps is an operational framework that uses Git as the single source of truth for declarative infrastructure and applications. In Kubernetes:

  • All application and infrastructure configurations (Kubernetes manifests, Helm charts, Terraform) are stored in Git.

  • An automated agent (e.g., Flux CD, Argo CD) continuously monitors the Git repository.

  • Any changes in Git are automatically detected and applied to the Kubernetes cluster, ensuring the cluster state always matches the Git repository. This provides auditable, traceable, and automated deployments.
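
A sketch of what the automated agent side can look like with Argo CD (one of the tools named above); the repository URL, path, and namespaces are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/orders-api-config.git   # placeholder Git repo holding the manifests
    targetRevision: main
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true        # delete resources removed from Git
      selfHeal: true     # revert manual drift back to the Git state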

43. What is a Helm chart and why is it useful for deploying .NET applications to Kubernetes?

Answer: Helm is the package manager for Kubernetes. A Helm chart is a collection of files that describe a related set of Kubernetes resources (e.g., Deployments, Services, ConfigMaps, Secrets) for an application. Usefulness for .NET:

  • Packaging: Packages your .NET application and its dependencies into a single deployable unit.

  • Templating: Allows parameterization of configurations (e.g., image tags, replica counts, connection strings) for different environments.

  • Version Control: Charts can be versioned and managed.

  • Reusability: Easily deploy the same .NET application across multiple clusters or namespaces with different configurations.
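
A sketch of the templating idea for a hypothetical chart: environment-specific values files override defaults, and the templates reference them:

# values.yaml (defaults)
image:
  repository: myregistry.azurecr.io/orders-api
  tag: "1.0.0"
replicaCount: 3

# values.prod.yaml (override used only in production)
# replicaCount: 6

# templates/deployment.yaml (excerpt)
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Deployment then becomes one command per environment, e.g. helm upgrade --install orders-api ./chart -f values.prod.yaml.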

44. How do you handle centralized logging and monitoring for .NET applications in Kubernetes?

Answer:

  • Centralized Logging:

    • Fluentd/Fluent Bit/Logstash: Agents running on each node to collect container logs (from stdout/stderr).

    • Elasticsearch/OpenSearch: For storing and indexing logs.

    • Kibana/Grafana: For visualizing and querying logs.

    • Cloud-native solutions: Azure Monitor, AWS CloudWatch Logs.

  • Centralized Monitoring:

    • Prometheus: For collecting metrics (e.g., from kube-state-metrics, node-exporter, application-specific metrics exposed by .NET apps).

    • Grafana: For visualizing Prometheus metrics and creating dashboards.

    • Application Insights (.NET): For deep application-level telemetry.

    • Cloud-native solutions: Azure Monitor for Containers, AWS CloudWatch Container Insights.

45. What are Kubernetes Network Policies and how can they enhance security for .NET microservices?

Answer: Network Policies are Kubernetes resources that define how Pods are allowed to communicate with each other and with external network endpoints. Enhance Security for .NET:

  • Isolation: Isolate .NET microservices, allowing only necessary communication paths (e.g., frontend can talk to API, but not directly to database).

  • Least Privilege: Enforce the principle of least privilege for network access.

  • Segment Networks: Create logical network segments within the cluster. This helps prevent unauthorized access and lateral movement in case of a compromise.
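
A sketch of a NetworkPolicy that only lets a hypothetical frontend reach the orders-api Pods (a CNI plugin that enforces policies, such as Calico, is assumed):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: orders-api            # the policy applies to the .NET API Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend  # only Pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080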

46. Explain the concept of Ingress vs. Service Mesh for .NET microservices in Kubernetes.

Answer:

  • Ingress: Primarily handles north-south traffic (traffic entering/leaving the cluster). It provides external routing, load balancing, and SSL termination for your .NET web apps.

  • Service Mesh (e.g., Istio or Linkerd; Dapr offers overlapping building blocks for .NET but is not itself a service mesh): Handles east-west traffic (service-to-service communication within the cluster). It provides advanced capabilities like:

    • Traffic management (A/B testing, canary deployments, retries, circuit breakers).

    • Observability (distributed tracing, metrics).

    • Security (mutual TLS).

Relationship: They complement each other. Ingress brings traffic into the cluster, and the Service Mesh manages how that traffic flows between your .NET microservices once inside.

47. How do you manage different environments (dev, test, prod) for .NET applications in Kubernetes?

Answer:

  • Namespaces: Create separate namespaces for each environment (e.g., dev, test, prod) within the same cluster. This provides logical isolation.

  • Separate Clusters: For stricter isolation, use entirely separate Kubernetes clusters for production and non-production environments.

  • Helm Charts with Values Files: Use Helm charts with different values.yaml files for each environment to apply environment-specific configurations (e.g., replica counts, database connection strings, resource limits).

  • Git Branches: Use different Git branches for environment-specific configurations, managed by GitOps.

48. What are Custom Resource Definitions (CRDs) in Kubernetes and how might they be used with .NET?

Answer: CRDs allow you to extend the Kubernetes API by defining your own custom resource types. Once a CRD is defined, you can create and manage custom objects in the cluster using kubectl. Use with .NET:

  • Operator Pattern: You could write a Kubernetes Operator (a custom controller, potentially in .NET using the Kubernetes .NET client library) that watches for instances of your custom resource and takes actions to manage a .NET application or related infrastructure based on its state.

  • Application-Specific Configuration: Define CRDs for complex, application-specific configurations that your .NET microservices need.
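
A sketch of a minimal CRD for a hypothetical DotNetApp resource that an operator (written, say, with the Kubernetes .NET client) could reconcile; the group and fields are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dotnetapps.example.com        # must be <plural>.<group>
spec:
  group: example.com                  # placeholder API group
  scope: Namespaced
  names:
    plural: dotnetapps
    singular: dotnetapp
    kind: DotNetApp
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                image:
                  type: string        # container image the operator should roll out
                replicas:
                  type: integer

Once applied, kubectl get dotnetapps lists the custom objects just like built-in resources.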

49. Describe a typical CI/CD pipeline for a .NET application on Kubernetes in Azure/AWS.

Answer:

  1. Source Stage: Code committed to Git (Azure Repos, GitHub, AWS CodeCommit).

  2. Build Stage (CI):

    • Triggered by commit.

    • CI tool (Azure DevOps Pipelines, GitHub Actions, AWS CodeBuild, Jenkins) runs dotnet restore, dotnet build, dotnet test.

    • Builds Docker image for .NET app.

    • Scans Docker image for vulnerabilities.

    • Pushes Docker image to container registry (ACR/ECR).

  3. Release/Deployment Stage (CD):

    • Triggered by successful image push.

    • Updates Kubernetes manifest (e.g., Deployment YAML) with the new image tag.

    • Applies manifest to target Kubernetes cluster (AKS/EKS) using kubectl or Helm.

    • Runs automated integration/end-to-end tests against the deployed application.

    • Implements progressive delivery strategies (rolling updates, canary, blue/green).

    • Monitors deployment health and rolls back if necessary.

50. What is the importance of Kubernetes readiness and liveness probes for zero-downtime deployments of .NET apps?

Answer:

  • Readiness Probes: Crucial for zero-downtime. They prevent traffic from being sent to a newly deployed or restarted .NET Pod until it's fully initialized, has loaded all dependencies, and is ready to process requests. This avoids HTTP 503 errors during deployments.

  • Liveness Probes: Ensure long-running stability. If a .NET app gets into a bad state (e.g., memory leak, deadlock) and stops responding, the liveness probe will fail, and Kubernetes will restart the Pod, bringing it back to a healthy state without manual intervention.

Together, they ensure that only healthy and ready instances of your .NET application are serving traffic at any given time, which is fundamental for high availability and zero-downtime deployments.
