Fake it until you make it: using custom HttpClientHandler to emulate a client server architecture

Last week, I decided to create a playground for the SyncFramework to demonstrate how synchronization works. The sync framework itself is not designed in a client-server architecture, but as a set of APIs that you can use to synchronize data.

Synchronization scenarios usually involve a client-server architecture, but when I created the SyncFramework, I decided that network communication was something outside the scope and not directly related to data synchronization. So, instead of embedding the client-server concept in the SyncFramework, I decided to create a set of extensions to handle these scenarios. If you want to take a look at the network extensions, you can see them here.

Now, let’s return to the playground. The main requirement for me, besides showing how the synchronization process works, was not having to maintain an infrastructure for it. You know, a Sync Server and a few databases that I would have to constantly delete. So, I decided to use Blazor WebAssembly and SQLite databases running in the browser. If you want to know more about how SQLite databases can run in the browser, take a look at this article.

Now, there’s still a problem: how do I run a server in the browser? I know it’s somehow possible, but I did not have the time to do the research. So, I decided to create my own HttpClientHandler.

How the HttpClientHandler works

HttpClientHandler exposes a number of properties and methods for controlling HTTP requests and responses. It is the underlying mechanism that HttpClient uses to send and receive HTTP requests and responses.

HttpClientHandler manages aspects such as the maximum number of redirects, the redirection policy, cookie handling, and automatic decompression of HTTP traffic. An instance can be configured and passed to HttpClient to control the HTTP requests that HttpClient makes.

HttpClientHandler is also helpful in testing situations where you need to imitate or mock HTTP requests and responses. In a new class, you can override the SendAsync method of HttpMessageHandler, the base class from which HttpClientHandler derives, to return whatever response your test requires.

Here is a basic example:

public class TestHandler : HttpMessageHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // You can check the request details and return different responses based on that.
        // For simplicity, we're always returning the same response here.
        var responseMessage = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Test response.")
        };
        return responseMessage;
    }
}

And here’s how you’d use this handler in a test:

[Test]
public async Task TestHttpClient()
{
    var handler = new TestHandler();
    var client = new HttpClient(handler);

    var response = await client.GetAsync("http://example.com");
    var responseContent = await response.Content.ReadAsStringAsync();

    Assert.AreEqual("Test response.", responseContent);
}

The TestHandler in this example always returns an HTTP 200 response with the body “Test response.” In a real test, you might implement SendAsync with more sophisticated logic and return different responses depending on the details of the request. That way, you can properly test how your code handles different responses without actually sending any HTTP requests.
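For instance, a handler along these lines (a sketch; the class name and routes are made up for illustration) can branch on the request path and hand back a different canned response per route:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

var client = new HttpClient(new RoutingHandler());
Console.WriteLine(await client.GetStringAsync("http://example.com/api/users"));
// prints: ["alice","bob"]

var missing = await client.GetAsync("http://example.com/unknown");
Console.WriteLine((int)missing.StatusCode);
// prints: 404

// A fake handler that inspects the request and returns a different
// canned response depending on the path.
class RoutingHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = request.RequestUri!.AbsolutePath switch
        {
            "/api/users" => new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("[\"alice\",\"bob\"]")
            },
            _ => new HttpResponseMessage(HttpStatusCode.NotFound)
        };
        return Task.FromResult(response);
    }
}
```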

Going back to our main story

Now that we know we can intercept the HTTP request and handle it locally, we can write an HttpClientHandler that takes the requests from the client nodes and processes them locally. With that, we have all the pieces to make the playground work without a real server. You can take a look at the implementation of the custom handler for the playground here.
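To make the idea concrete, here is a minimal sketch (the type and route names are hypothetical, not the playground’s actual code): the handler hands every request to an in-process delegate that plays the role of the server, so no network communication ever happens.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// A hypothetical "server" running in the same process: it receives the
// request path and body and produces the response body.
Func<string, string, Task<string>> localServer = (path, body) =>
    Task.FromResult($"processed {path}: {body}");

var client = new HttpClient(new LocalDispatchHandler(localServer));
var reply = await client.PostAsync("http://playground/sync/push",
    new StringContent("delta-1"));
Console.WriteLine(await reply.Content.ReadAsStringAsync());
// prints: processed /sync/push: delta-1

// Handler that never touches the network: it routes every request to the
// in-process delegate and wraps the result as an HTTP response.
class LocalDispatchHandler : HttpMessageHandler
{
    private readonly Func<string, string, Task<string>> _dispatch;

    public LocalDispatchHandler(Func<string, string, Task<string>> dispatch)
        => _dispatch = dispatch;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var body = request.Content is null
            ? string.Empty
            : await request.Content.ReadAsStringAsync(cancellationToken);
        var result = await _dispatch(request.RequestUri!.AbsolutePath, body);
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(result)
        };
    }
}
```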

Until next time, happy coding )))))

Alchemy Framework: 2 – Repository Structure

Alright, it’s time to start writing some code, but first, let’s decide how this project will be organized.

So far, the repository structure that I’ve found most appealing is the one I used for the SyncFramework (https://github.com/egarim/SyncFramework). Here is a representation with bullet points:

  • Repo Folder: This is the parent folder that will contain all the code from our project.
    • Git Files: These are the ignore and attributes files.
    • Clean.bat: This is a batch file that deletes all child BIN and OBJ folders to ensure our repository does not contain binary files (sometimes the ‘clean’ command from Visual Studio does not clear the outputs completely).
    • CHANGES.MD: This is a GitHub markdown file that contains the history of changes in the code.
    • README.MD: This is the landing page of the current repository where we can write some basic instructions and other important information about the project.
    • src (folder): This is where the actual code resides.
      • Directory.Build.props: This is a special file name recognized by the .NET Core’s MSBuild system. When MSBuild runs, it automatically imports this file from the current directory and all parent directories. It is typically used for sharing common properties across multiple projects in a solution.
      • Alchemy.Net.Core: This contains a class library where all the interfaces, abstract classes, and base implementations reside.
      • Tests.Alchemy.Net.Core: This is the NUnit test project for Alchemy.Net.Core. I typically write integration tests where I examine the use cases for the library, rather than unit tests.
      • Alchemy.Net.sln: This is the main solution file. We will eventually create solution filter files for the rest of the projects.
      • Examples: These are technical examples of how to use the library.

Before I conclude this post, I want to discuss versioning. The main idea here is this: the project AlchemyDotNet.Core will start at version 1.0.0, and the version will change when there is a fix or an update to the core specification. We will only move to a new version like 2.0.0 when the specification introduces breaking changes. This means that we will be on version 1 for a very long time.

Link to the repo

https://github.com/egarim/Alchemy.Net

Previous posts

Alchemy Framework: 1 – Creating a framework for import data


P.O.U.N.D stack (Postgres, Oqtane, Ubuntu & DotNet)

A stack in software development refers to a collection of technologies, tools, and frameworks that are used together to build and run a complete application or solution. A typical stack consists of components that handle different aspects of the software development process, including frontend, backend, databases, and sometimes even the hosting environment.

A stack is often categorized into different layers based on the functionality they provide:

  1. Frontend: This layer is responsible for the user interface (UI) and user experience (UX) of an application. It consists of client-side technologies like HTML, CSS, and JavaScript, as well as libraries or frameworks such as React, Angular, or Vue.js.
  2. Backend: This layer handles the server-side logic, processing user requests, and managing interactions with databases and other services. Backend technologies can include programming languages like Python, Ruby, Java, or PHP, and frameworks like Django, Ruby on Rails, or Spring.
  3. Database: This layer is responsible for storing and managing the application’s data. Databases can be relational (e.g., MySQL, PostgreSQL, or Microsoft SQL Server) or NoSQL (e.g., MongoDB, Cassandra, or Redis), depending on the application’s data structure and requirements.
  4. Hosting Environment: This layer refers to the infrastructure where the application is deployed and run. It can include on-premises servers, cloud-based platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, or container orchestration platforms like Kubernetes or Docker Swarm.

Developers often refer to specific combinations of these technologies as named stacks. Some examples include:

  1. LAMP: Linux (operating system), Apache (web server), MySQL (database), and PHP (backend programming language).
  2. MEAN: MongoDB (database), Express.js (backend framework), Angular (frontend framework), and Node.js (runtime environment).
  3. MERN: MongoDB (database), Express.js (backend framework), React (frontend library), and Node.js (runtime environment).

Selecting a stack depends on factors such as project requirements, team expertise, performance, and scalability needs. By using a well-defined stack, developers can streamline the development process, improve collaboration, and ensure that all components work together efficiently.

The P.O.U.N.D. Stack is an innovative software development stack that combines Postgres, Oqtane, Ubuntu, and DotNet to create powerful, modern, and scalable applications. This stack is designed to leverage the strengths of each technology, providing developers with an integrated and efficient environment for building web applications.

  1. Postgres (P): As the database layer, Postgres offers robust performance, scalability, and support for advanced data types, such as GIS and JSON. Its open-source nature and active community make it a reliable choice for handling the storage and management of application data.
  2. Oqtane (O): Serving as the frontend framework, Oqtane is built on top of the Blazor technology in .NET, allowing for the creation of modern, responsive, and feature-rich user interfaces. With Oqtane, developers can create modular and extensible applications, while also benefiting from built-in features such as authentication, authorization, and multi-tenancy.
  3. Ubuntu (U): As the operating system and hosting environment, Ubuntu provides a stable, secure, and easy-to-use platform for deploying and running applications. It is widely supported and offers excellent compatibility with a variety of hardware and cloud platforms, making it an ideal choice for hosting P.O.U.N.D. Stack applications.
  4. DotNet (D): The backend layer is powered by the .NET framework, which offers a versatile and high-performance environment for server-side development. With support for multiple programming languages (such as C#, F#, and VB.NET), powerful libraries, and a large ecosystem, .NET allows developers to build scalable and efficient backend logic for their applications.

In summary, the P.O.U.N.D. Stack brings together the power of Postgres, Oqtane, Ubuntu, and DotNet to deliver a comprehensive and efficient development stack. By leveraging the unique capabilities of each technology, developers can build modern, scalable, and high-performance web applications that cater to diverse business needs.

5 Good Practices for Integration Testing with NUnit

Integration tests are a crucial part of any software development process, as they help ensure that different parts of a system are working together correctly. When writing integration tests, it is important to follow best practices in order to ensure that your tests are effective and maintainable. Here are a few good practices for integration testing using NUnit:

  1. Use test fixtures: NUnit provides a concept called “test fixtures,” which allow you to set up and tear down common resources that are needed by multiple test cases. This can help reduce duplication and make your tests more maintainable.
    [TestFixture]
    public class DatabaseTests
    {
        private Database _database;
    
        [SetUp]
        public void SetUp()
        {
            _database = new Database();
        }
    
        [TearDown]
        public void TearDown()
        {
            _database.Dispose();
        }
    
        [Test]
        public void Test1()
        {
            // test code goes here
        }
    
        [Test]
        public void Test2()
        {
            // test code goes here
        }
    }
    


  2. Use setup and teardown methods: In addition to test fixtures, NUnit also provides setup and teardown methods that can be used to perform common tasks before and after each test case. This can be helpful for setting up test data or cleaning up after a test.
    [TestFixture]
    public class DatabaseTests
    {
        private Database _database;

        [OneTimeSetUp]
        public void FixtureSetUp()
        {
            // Runs once, before the first test in the fixture
            _database = new Database();
        }

        [OneTimeTearDown]
        public void FixtureTearDown()
        {
            // Runs once, after the last test in the fixture
            _database.Dispose();
        }

        [SetUp]
        public void TestSetup()
        {
            // Runs before each test case (e.g. seed test data)
        }

        [TearDown]
        public void TestTeardown()
        {
            // Runs after each test case (e.g. clean up test data)
        }

        [Test]
        public void Test1()
        {
            // test code goes here
        }

        [Test]
        public void Test2()
        {
            // test code goes here
        }
    }
    


  3. Use test cases: NUnit allows you to specify multiple test cases for a single test method using the TestCase attribute. This can help reduce duplication and make it easier to test different scenarios.
    [TestFixture]
    public class CalculatorTests
    {
        [TestCase(1, 2, 3)]
        [TestCase(10, 20, 30)]
        [TestCase(-1, -2, -3)]
        public void TestAdd(int x, int y, int expected)
        {
            Calculator calculator = new Calculator();
            int result = calculator.Add(x, y);
            Assert.AreEqual(expected, result);
        }
    }
    


  4. Use the Assert class: NUnit provides a variety of assertions that can be used to verify the behavior of your code. It is important to use these assertions rather than manually checking for expected results, as they provide better error messages and make it easier to debug test failures.
    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void TestAdd()
        {
            Calculator calculator = new Calculator();
            int result = calculator.Add(1, 2);
            Assert.AreEqual(3, result);
        }
    
        [Test]
        public void TestSubtract()
        {
            Calculator calculator = new Calculator();
            int result = calculator.Subtract(10, 5);
            Assert.AreEqual(5, result);
        }
    }
    


  5. Use test categories: NUnit allows you to categorize your tests using the Category attribute. This can be helpful for organizing your tests and selectively running only certain categories of tests.
    [TestFixture]
    public class DatabaseTests
    {
        [Test]
        [Category("Database")]
        public void Test1()
        {
            // test code goes here
        }
    
        [Test]
        [Category("Database")]
        public void Test2()
        {
            // test code goes here
        }
    
        [Test]
        [Category("API")]
        public void Test3()
        {
            // test code goes here
        }
    }

By following these best practices, you can write integration tests that are effective, maintainable, and easy to understand. This will help you ensure that your code is working correctly and reduce the risk of regressions as you continue to develop and evolve your system.

Moving to Apple Silicon as a DotNet Developer

ARM (Advanced RISC Machine) is a popular architecture for mobile devices and other low-power devices. Microsoft has supported ARM architectures in the .NET framework for many years, and this support has continued with the release of .NET 6 and .NET 7.

In .NET 6 and 7, support for ARM architectures has been improved and expanded in several ways. One of the key changes is the introduction of ARM64 JIT (Just-In-Time) compilation, which allows .NET applications to take advantage of the performance improvements offered by the ARM64 architecture. This means that .NET applications can now be compiled and run natively on ARM64 devices, providing better performance and a more seamless experience for users.

Another important change in .NET 6 and 7 is the support for ARM32 and ARM64 in ASP.NET Core. This means that developers can now build and deploy web applications on ARM devices, making it easier to create cross-platform applications that can run on a wide range of devices.

In addition to these changes, .NET 7 also includes ARM64 support in the Native AOT toolchain, which allows developers to build native applications for ARM devices using C# and .NET. This makes it easier to create high-performance, native applications for ARM devices without having to write code in a different language.
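A quick way to confirm whether your process is actually running natively on ARM64, rather than under x64 emulation, is to inspect RuntimeInformation:

```csharp
using System;
using System.Runtime.InteropServices;

// On an M1/M2 Mac with a native arm64 SDK, both values report Arm64;
// an x64 SDK running under emulation would report X64 for the process.
Console.WriteLine($"OS architecture:      {RuntimeInformation.OSArchitecture}");
Console.WriteLine($"Process architecture: {RuntimeInformation.ProcessArchitecture}");
Console.WriteLine($"Framework:            {RuntimeInformation.FrameworkDescription}");
```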

In conclusion, the support for ARM architectures in .NET 6 and 7 is an important development for developers who want to create and deploy applications on devices such as Apple’s M1 and M2. With this support, developers can take advantage of the performance and capabilities of the ARM architecture to build powerful and efficient applications that run on a wide range of devices, including mobile and other low-power devices.

How to wrap your synchronous implementation in an asynchronous implementation.

In this article, we will be discussing why it is sometimes useful to wrap your synchronous implementation in an asynchronous implementation.

Introduction

Async programming is an important paradigm in modern software development, allowing you to perform long-running tasks without blocking the calling thread. Async programming is particularly useful in scenarios where the operation may take a long time to complete, or when the operation is interacting with a slow resource, such as a network or a database.

One common scenario where you may need to wrap your synchronous implementation in an asynchronous implementation is when you are working with an API or a library that does not provide async versions of its methods. In these cases, you can use the Task.Run method to wrap the synchronous methods in a task, allowing you to use the await keyword to asynchronously wait for the operation to complete.

Example: Wrapping a Synchronous Data Processor

To illustrate this concept, let’s consider the following synchronous IDataProcessor interface:

public interface IDataProcessor
{
    void ProcessData(IEnumerable<IData> data);
}

This interface has a single method, ProcessData, which takes an IEnumerable of IData objects as input and processes the data.

Now let’s say that you want to use this IDataProcessor interface in an async context, but the interface does not provide an async version of the ProcessData method. To use this interface asynchronously, you can create an async wrapper class that wraps the synchronous implementation in an async implementation.

Here is an example of how you can wrap the synchronous IDataProcessor implementation in an asynchronous implementation:

public class AsyncDataProcessor
{
    private readonly IDataProcessor _dataProcessor;

    public AsyncDataProcessor(IDataProcessor dataProcessor)
    {
        _dataProcessor = dataProcessor;
    }

    public Task ProcessDataAsync(IEnumerable<IData> data)
    {
        return Task.Run(() => _dataProcessor.ProcessData(data));
    }
}

This implementation has a single method, ProcessDataAsync, which takes an IEnumerable of IData objects as input and asynchronously processes the data. The implementation uses the Task.Run method to wrap the synchronous ProcessData method in a task, allowing it to be called asynchronously using the await keyword.

To use this implementation, you can simply create an instance of AsyncDataProcessor and call the ProcessDataAsync method, passing in the list of data as an argument. For example:

var dataProcessor = new AsyncDataProcessor(new DataProcessor());
await dataProcessor.ProcessDataAsync(data);

This code creates an instance of the AsyncDataProcessor class and calls the ProcessDataAsync method, passing in the data object as an argument. The await keyword is used to asynchronously wait for the data processing to complete.

Conclusion

In this article, we discussed why it is sometimes useful to wrap your synchronous implementation in an asynchronous implementation. We used the Task.Run method to wrap a synchronous IDataProcessor implementation in an async implementation, allowing us to use the await keyword to wait for the operation to complete without blocking the calling thread.