Thursday, July 24, 2025

Azurite: Local Azure Storage Emulator for Development and Testing

 

Introduction

Developing applications that interact with Azure Storage (Blobs, Queues, and Tables) often requires a robust and efficient way to test these interactions without incurring costs or dependencies on actual cloud resources during the development cycle. This is where Azurite comes in. Azurite is an open-source, cross-platform local emulator for Azure Storage, providing a free and fast environment to test your storage-related code locally.

What is Azurite?

Azurite is a lightweight server that emulates the Azure Blob, Queue, and Table storage services on your local machine. It is designed to be highly compatible with the Azure Storage APIs, meaning the code you write to interact with Azure Storage in the cloud will generally work seamlessly with Azurite locally, requiring only a change in the connection string.
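Since only the connection string changes between Azurite and the cloud, a common pattern is to read it from configuration with a local default. A minimal Python sketch of that pattern (the environment variable name `AZURE_STORAGE_CONNECTION_STRING` is the conventional one; the fallback shown assumes the development-storage shortcut, which the .NET SDK resolves to Azurite):

```python
import os

def storage_connection_string() -> str:
    """Return the configured connection string, defaulting to local Azurite.

    In CI or production, AZURE_STORAGE_CONNECTION_STRING points at a real
    storage account; locally it can simply be left unset.
    """
    return os.environ.get(
        "AZURE_STORAGE_CONNECTION_STRING",
        "UseDevelopmentStorage=true",  # shortcut understood by the .NET SDK
    )
```

This keeps application code identical in every environment; only configuration decides where the data goes.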

It's built on Node.js and can be easily installed via npm or run as a Docker container. Azurite is the recommended local emulator for Azure Storage development, replacing the older Azure Storage Emulator.

Key Features of Azurite

Azurite offers several features that make it an indispensable tool for Azure developers:

  1. API Compatibility: Azurite implements the same APIs as Azure Storage, ensuring that your application code behaves consistently whether it's talking to the local emulator or the actual Azure services. This includes support for REST APIs and various Azure Storage SDKs (e.g., .NET, Python, Java, JavaScript).

  2. Cross-Platform Support: Being Node.js-based, Azurite runs on Windows, macOS, and Linux, making it accessible to developers across different operating systems.

  3. Blob Storage Emulation:

    • Supports Block Blobs, Page Blobs, and Append Blobs.

    • Emulates common blob operations like upload, download, delete, list, and setting metadata.

    • Handles blob tiers (Hot, Cool, Archive) for development purposes.

  4. Queue Storage Emulation:

    • Provides local emulation for Azure Queue Storage.

    • Supports operations such as adding messages, peeking messages, getting messages, updating messages, and deleting messages.

    • Useful for testing asynchronous message processing workflows.

  5. Table Storage Emulation:

    • Emulates Azure Table Storage, a NoSQL key-attribute store.

    • Supports CRUD (Create, Read, Update, Delete) operations on entities.

    • Ideal for testing applications that use structured non-relational data.

  6. HTTPS Support: Azurite supports HTTPS, allowing you to test your application with secure connections, mirroring a production environment more closely. You can generate self-signed certificates for this purpose.

  7. Persistence: Azurite can persist data to a local disk, meaning your data will remain available even after you restart the emulator. This is crucial for maintaining state during development and testing sessions.

  8. Configurability: It offers various configuration options, such as specifying the listening ports, the data persistence location, and enabling debug logs.

  9. Integration with Azure Tools: Azurite integrates well with other Azure development tools, including Azure Storage Explorer, which can connect to your local Azurite instance to view and manage emulated storage resources.
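As an illustration of the queue emulation above: queue clients have traditionally Base64-encoded message text on the wire (the modern SDKs expose this as a configurable message encode/decode policy), and the same round-trip applies against Azurite. A small SDK-independent sketch of that encoding:

```python
import base64

def encode_queue_message(text: str) -> str:
    """Base64-encode UTF-8 text, as queue clients commonly do on the wire."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def decode_queue_message(payload: str) -> str:
    """Reverse of encode_queue_message: Base64 payload back to text."""
    return base64.b64decode(payload).decode("utf-8")
```

When a message appears scrambled in Azure Storage Explorer or in raw REST responses, it is usually this encoding at work rather than corruption.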

Benefits for Developers

  • Cost Savings: No charges for storage transactions or data egress during development.

  • Offline Development: Work on storage-dependent features without an internet connection.

  • Faster Iteration: Local operations are significantly faster than cloud operations, leading to quicker development cycles.

  • Isolated Testing: Create isolated test environments without affecting production or shared development resources.

  • Consistent Environment: Ensures that your development environment closely mirrors the production Azure Storage environment.

Local Automated Testing Scenarios for C#/.NET and Python Developers

This section outlines a practical scenario for using Azurite in an automated testing setup for both C#.NET and Python developers.

Use Case: Simple File Upload and Retrieval

Imagine you are developing a service that allows users to upload files (e.g., images, documents) and later retrieve them. This service interacts with Azure Blob Storage. For local development and automated testing, you want to ensure that the file upload and download functionality works correctly without deploying to Azure.

General Setup for Azurite

Before diving into language-specific scenarios, ensure you have Azurite installed and running:

  1. Install Node.js: If you don't have it, download and install Node.js from nodejs.org.

  2. Install Azurite: Open a terminal or command prompt and run:

    npm install -g azurite
    
  3. Start Azurite: In your terminal, start Azurite. By default, it will listen on standard ports (10000 for Blobs, 10001 for Queues, 10002 for Tables).

    azurite --silent
    

    (The --silent flag keeps the output minimal. You can also specify --location <path> for data persistence or --debug <path> for logs).

    Azurite will now be running and accessible via the default connection string: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1AUkADrMYqOZqFVvKF2jtazHGHPZcaDxwFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;

    For testing with the .NET SDK, you can use the shorthand connection string UseDevelopmentStorage=true, which the SDK expands to the full local Azurite endpoints.
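For tests that need individual endpoints or the account key rather than the whole string, the `key=value;...` format is easy to take apart. A small helper (a sketch, not part of any SDK):

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split an Azure Storage connection string into its key/value parts."""
    parts = {}
    for segment in conn_str.strip().split(";"):
        if not segment:
            continue
        # Split on the FIRST '=' only, so Base64 '=' padding in the
        # account key is preserved in the value.
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts
```

Using `partition` instead of `split` matters here: the well-known account key ends in `==`, which a naive `split("=")` would mangle.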

C#.NET Automated Testing Scenario

Prerequisites:

  • .NET SDK installed.

  • Azure.Storage.Blobs NuGet package installed in your project.

  • A testing framework like NUnit or xUnit.

Conceptual Code (Service Layer):

// MyBlobService.cs
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System.IO;
using System.Threading.Tasks;

public class MyBlobService
{
    private readonly BlobServiceClient _blobServiceClient;
    private readonly string _containerName;

    public MyBlobService(string connectionString, string containerName)
    {
        _blobServiceClient = new BlobServiceClient(connectionString);
        _containerName = containerName;
    }

    public async Task UploadFileAsync(string blobName, Stream content)
    {
        BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
        await containerClient.CreateIfNotExistsAsync(); // Ensure container exists for testing

        BlobClient blobClient = containerClient.GetBlobClient(blobName);
        await blobClient.UploadAsync(content, overwrite: true);
    }

    public async Task<Stream> DownloadFileAsync(string blobName)
    {
        BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
        BlobClient blobClient = containerClient.GetBlobClient(blobName);

        if (!await blobClient.ExistsAsync())
        {
            return null; // Or throw an exception
        }

        MemoryStream downloadStream = new MemoryStream();
        await blobClient.DownloadToAsync(downloadStream);
        downloadStream.Position = 0; // Reset stream position for reading
        return downloadStream;
    }

    public async Task DeleteContainerAsync()
    {
        BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
        await containerClient.DeleteIfExistsAsync();
    }
}

Automated Test (NUnit Example):

// MyBlobServiceTests.cs
using NUnit.Framework;
using System.IO;
using System.Text;
using System.Threading.Tasks;

[TestFixture]
public class MyBlobServiceTests
{
    private MyBlobService _blobService;
    private const string ConnectionString = "UseDevelopmentStorage=true"; // Points to Azurite
    private const string TestContainerName = "test-files-container";

    [SetUp]
    public async Task Setup()
    {
        // Initialize service for each test
        _blobService = new MyBlobService(ConnectionString, TestContainerName);
        // Clean up any previous test data to ensure test isolation
        await _blobService.DeleteContainerAsync();
    }

    [TearDown]
    public async Task Teardown()
    {
        // Clean up after each test
        await _blobService.DeleteContainerAsync();
    }

    [Test]
    public async Task UploadAndDownloadFile_ShouldSucceed()
    {
        // Arrange
        string blobName = "mytestfile.txt";
        string fileContent = "Hello, Azurite!";
        using MemoryStream uploadStream = new MemoryStream(Encoding.UTF8.GetBytes(fileContent));

        // Act
        await _blobService.UploadFileAsync(blobName, uploadStream);
        using Stream downloadedStream = await _blobService.DownloadFileAsync(blobName);

        // Assert
        Assert.IsNotNull(downloadedStream);
        using StreamReader reader = new StreamReader(downloadedStream);
        string downloadedContent = await reader.ReadToEndAsync();
        Assert.AreEqual(fileContent, downloadedContent);
    }

    [Test]
    public async Task DownloadNonExistentFile_ShouldReturnNull()
    {
        // Arrange
        string blobName = "nonexistentfile.txt";

        // Act
        using Stream downloadedStream = await _blobService.DownloadFileAsync(blobName);

        // Assert
        Assert.IsNull(downloadedStream);
    }

    // Add more tests for edge cases, large files, different blob types, etc.
}

Explanation:

  • The ConnectionString is set to "UseDevelopmentStorage=true", which automatically directs the Azure Storage SDK to connect to the local Azurite instance.

  • [SetUp] and [TearDown] methods ensure a clean state by deleting the container before and after each test; the container is recreated on the next upload. This is crucial for isolated and repeatable automated tests.

  • The test uploads a file, then downloads it, and asserts that the content matches, verifying the end-to-end flow.

Python Automated Testing Scenario

Prerequisites:

  • Python installed.

  • azure-storage-blob package installed (pip install azure-storage-blob).

  • A testing framework like pytest or unittest.

Conceptual Code (Service Layer):

# my_blob_service.py
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
from azure.storage.blob import BlobServiceClient

class MyBlobService:
    def __init__(self, connection_string, container_name):
        self.blob_service_client = BlobServiceClient.from_connection_string(connection_string)
        self.container_name = container_name

    def upload_file(self, blob_name, content_bytes):
        container_client = self.blob_service_client.get_container_client(self.container_name)
        try:
            container_client.create_container()  # Ensure container exists for testing
        except ResourceExistsError:
            pass  # Container was already created by an earlier call

        blob_client = container_client.get_blob_client(blob_name)
        blob_client.upload_blob(content_bytes, overwrite=True)

    def download_file(self, blob_name):
        container_client = self.blob_service_client.get_container_client(self.container_name)
        blob_client = container_client.get_blob_client(blob_name)

        if not blob_client.exists():
            return None

        download_stream = blob_client.download_blob()
        return download_stream.readall()

    def delete_container(self):
        container_client = self.blob_service_client.get_container_client(self.container_name)
        try:
            container_client.delete_container()
        except ResourceNotFoundError:
            pass  # delete_container raises if the container does not exist

Automated Test (Pytest Example):

# test_my_blob_service.py
import pytest
from my_blob_service import MyBlobService
import os

# Connection string for Azurite
CONNECTION_STRING = "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1AUkADrMYqOZqFVvKF2jtazHGHPZcaDxwFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
TEST_CONTAINER_NAME = "python-test-files-container"

@pytest.fixture(scope="function")
def blob_service_fixture():
    """Provides a MyBlobService instance and cleans up the container before/after each test."""
    service = MyBlobService(CONNECTION_STRING, TEST_CONTAINER_NAME)
    # Clean up before the test
    service.delete_container()
    yield service
    # Clean up after the test
    service.delete_container()

def test_upload_and_download_file_should_succeed(blob_service_fixture):
    # Arrange
    blob_name = "my_python_test_file.txt"
    file_content = "Hello from Python Azurite!"
    content_bytes = file_content.encode('utf-8')

    # Act
    blob_service_fixture.upload_file(blob_name, content_bytes)
    downloaded_content_bytes = blob_service_fixture.download_file(blob_name)

    # Assert
    assert downloaded_content_bytes is not None
    assert downloaded_content_bytes.decode('utf-8') == file_content

def test_download_non_existent_file_should_return_none(blob_service_fixture):
    # Arrange
    blob_name = "non_existent_file.txt"

    # Act
    downloaded_content = blob_service_fixture.download_file(blob_name)

    # Assert
    assert downloaded_content is None

# To run these tests:
# 1. Save the code above as my_blob_service.py and test_my_blob_service.py in the same directory.
# 2. Ensure Azurite is running (`azurite --silent`).
# 3. Open your terminal in that directory and run: `pytest`

Explanation:

  • The Python example uses the full Azurite connection string because, unlike the .NET SDK, the Python SDK does not understand the UseDevelopmentStorage=true shortcut. You can also supply the string via an environment variable such as AZURE_STORAGE_CONNECTION_STRING.

  • pytest.fixture is used to set up and tear down the test environment (creating and deleting the container) for each test function, ensuring test isolation.

  • The tests follow a similar pattern: arrange data, act on the service, and assert the expected outcome.
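A further refinement of the fixture above is to generate a unique container name per test run, so parallel runs against the same Azurite instance cannot collide. A hypothetical helper (Azure container names must be 3 to 63 characters of lowercase letters, digits, and hyphens):

```python
import uuid

def unique_container_name(prefix: str = "test") -> str:
    """Generate a container name satisfying Azure naming rules.

    uuid4().hex is 32 lowercase hex characters, so prefix + hyphen + hex
    stays well-formed; the slice keeps the total length within 63.
    """
    return f"{prefix}-{uuid.uuid4().hex}"[:63]
```

Passing such a name to the fixture instead of a constant makes each test's container disposable and collision-free.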

Automated Testing Principles with Azurite

  • Test Isolation: Always ensure that each test runs in a clean, isolated environment. For Azurite, this typically means creating a unique container or clearing data before and after each test.

  • Mocking vs. Emulation: Azurite provides emulation, which is more robust than mocking the Azure Storage SDK directly. Emulation tests the actual interaction with a storage service (albeit local), catching more integration issues.

  • CI/CD Integration: Azurite can be easily integrated into CI/CD pipelines by running it as a Docker container. This allows your automated tests to run against a local storage emulator as part of your build process.

  • Real-world Scenarios: Design your tests to cover common scenarios (upload, download, delete, list), edge cases (empty files, large files, non-existent files), and error handling.
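For the CI/CD point above, a pipeline step can fail fast when the emulator is not up before the test suite starts. A hedged sketch using a plain TCP probe against Azurite's default Blob port (10000):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example guard at the top of a test session:
# if not is_listening("127.0.0.1", 10000):
#     raise SystemExit("Azurite is not running; start it with `azurite --silent`")
```

This gives a clear "emulator not running" failure instead of dozens of confusing connection-refused errors deep inside individual tests.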

Conclusion

Azurite is an invaluable tool for any developer working with Azure Storage. It provides a reliable, cost-effective, and fast local environment for developing and thoroughly testing your applications. By integrating Azurite into your development and automated testing workflows, you can significantly improve your productivity and the quality of your Azure-dependent applications.

Monday, July 14, 2025

Today's Topic - Building a Docker Container for a Simple Node.js App.

 

Building a Docker image for a simple Node.js app.

This post explains the process of taking a Dockerfile, building a Docker image from it, and then running a Docker container based on that image.

Prerequisites

Before you begin, ensure you have Docker installed and running on your system:

  • Docker Desktop: For Windows and macOS users, download and install Docker Desktop from the official Docker website.

  • Docker Engine: For Linux users, follow the installation instructions for Docker Engine on the official Docker documentation.

Verify Docker is running by opening your terminal or command prompt and typing:

docker --version

Step 1: Create Your Dockerfile and Application Files

A Dockerfile is a text file that contains all the commands a user could call on the command line to assemble an image. It's typically named Dockerfile (without any file extension).

Let's create a simple Node.js application to demonstrate.

  1. Create a new directory for your project:
    mkdir my-docker-app
    cd my-docker-app

  2. Create the Dockerfile inside my-docker-app with the following content:
    # Use an official Node.js runtime as a parent image.
    # 'alpine' variants are smaller and more secure.
    FROM node:18-alpine

    # Set the working directory inside the container.
    # All subsequent commands will run from this directory.
    WORKDIR /app

    # Copy package.json and package-lock.json first.
    # This allows Docker to cache this layer if dependencies don't change,
    # speeding up subsequent builds.
    COPY package*.json ./

    # Install application dependencies.
    RUN npm install

    # Copy the rest of the application code into the container's working directory.
    COPY . .

    # Expose port 3000 to the outside world.
    # This informs Docker that the container listens on this port at runtime.
    EXPOSE 3000

    # Define the command to run the application when the container starts.
    # This uses the 'exec' form, which is recommended.
    CMD [ "node", "app.js" ]

  3. Create app.js (your Node.js application) in the same my-docker-app directory:
    const http = require('http');

    const hostname = '0.0.0.0'; // Listen on all network interfaces
    const port = 3000;         // Port inside the container

    const server = http.createServer((req, res) => {
      res.statusCode = 200;
      res.setHeader('Content-Type', 'text/plain');
      res.end('Hello from Docker!\n');
    });

    server.listen(port, hostname, () => {
      console.log(`Server running at http://${hostname}:${port}/`);
    });

  4. Create package.json (Node.js project configuration) in the my-docker-app directory:
    {
      "name": "my-docker-app",
      "version": "1.0.0",
      "description": "A simple Node.js app for Docker demonstration",
      "main": "app.js",
      "scripts": {
        "start": "node app.js"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }

Your project directory structure should now look like this:

my-docker-app/
├── Dockerfile
├── app.js
└── package.json

Step 2: Build the Docker Image

With your Dockerfile and application files in place, you can now build your Docker image. This process reads the Dockerfile instructions and creates a runnable image.

  1. Open your terminal or command prompt.

  2. Navigate to your project directory (my-docker-app):
    cd my-docker-app

  3. Run the Docker build command:
    docker build -t my-node-app .

  • docker build: The command to build a Docker image.

  • -t my-node-app: This tags your image with a human-readable name (my-node-app). It's good practice to use lowercase and descriptive names. You can also add a version tag (e.g., my-node-app:1.0).

  • .: This specifies the "build context" as the current directory. Docker will look for the Dockerfile in this location and send all files in this directory to the Docker daemon for the build process.

You will see output detailing each step as Docker executes the instructions in your Dockerfile. Upon successful completion, you'll see a message similar to:

    Successfully built <image_id>
    Successfully tagged my-node-app:latest

  4. Verify the image was built:
    You can list all local Docker images to confirm your new image is present:
    docker images

    You should see my-node-app listed in the output.

Step 3: Run a Docker Container from the Image

Now that you have a Docker image, you can create and run a container from it. A container is a runnable instance of an image.

  1. Run the Docker container:
    docker run -p 4000:3000 my-node-app

  • docker run: The command to create and run a new container.

  • -p 4000:3000: This is crucial for accessing your application. It maps port 4000 on your host machine to port 3000 inside the Docker container. Your Node.js application is listening on port 3000 within the container (as defined by EXPOSE 3000 and app.js).

  • my-node-app: The name of the image you want to run.

Your terminal will now display the output from your Node.js application, similar to:

    Server running at http://0.0.0.0:3000/


Step 4: Access Your Application

With the container running, open your web browser and navigate to:

http://localhost:4000

You should see the message: "Hello from Docker!"
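Beyond checking in a browser, the same check can be automated as a smoke test. A hypothetical Python sketch (it assumes the container from Step 3 is running with port 4000 mapped; the URL and expected text are placeholders for your own app):

```python
from urllib.error import URLError
from urllib.request import urlopen

def fetch_body(url: str, timeout: float = 2.0):
    """Return the HTTP response body as text, or None if unreachable."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")
    except (URLError, OSError):
        return None

# Example usage against the running container:
# body = fetch_body("http://localhost:4000")
# assert body is not None and "Hello from Docker!" in body
```

Returning None on connection failure (rather than raising) makes the helper convenient in retry loops while a freshly started container finishes booting.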

Step 5: Stop and Remove Docker Resources (Optional)

Stopping a Running Container

  • If running in the foreground (as in Step 3): Go back to the terminal where the container is running and press Ctrl+C. This will stop the container.

  • If running in detached mode (background): For long-running applications, you'll often want to run the container in the background. You can do this by adding the -d (detached) flag:
    docker run -d -p 4000:3000 my-node-app

    To stop a detached container, you first need its container ID or name. List running containers:
    docker ps

    Note the CONTAINER ID (e.g., a1b2c3d4e5f6) or NAMES (e.g., vigilant_williams). Then stop it:
    docker stop <CONTAINER_ID_OR_NAME>

Removing Containers

Stopped containers still consume disk space. To remove a stopped container:

docker rm <CONTAINER_ID_OR_NAME>

You can also remove all stopped containers:

docker container prune

Removing Images

If you no longer need a Docker image, you can remove it to free up disk space:

docker rmi my-node-app

If the image is being used by a container, you'll need to stop and remove that container first.

