Design Patterns for Library Creators in Dotnet

Hello there! Today, we’re going to delve into the fascinating world of design patterns. Don’t worry if you’re not a tech whiz – we’ll keep things simple and relatable. We’ll use the SyncFramework as an example, but our main focus will be on the design patterns themselves. So, let’s get started!

What are Design Patterns?

Design patterns are like blueprints – they provide solutions to common problems that occur in software design. They’re not ready-made code that you can directly insert into your program. Instead, they’re guidelines you can follow to solve a particular problem in a specific context.

SOLID Design Principles

One of the most popular sets of design principles is SOLID. It’s an acronym that stands for five principles that help make software designs more understandable, flexible, and maintainable. Let’s break it down:

  1. Single Responsibility Principle: A class should have only one reason to change. In other words, it should have only one job.
  2. Open-Closed Principle: Software entities should be open for extension but closed for modification. This means we should be able to add new features or functionality without changing the existing code.
  3. Liskov Substitution Principle: Subtypes must be substitutable for their base types. This principle is about creating new derived classes that can replace the functionality of the base class without breaking the application.
  4. Interface Segregation Principle: Clients should not be forced to depend on interfaces they do not use. This principle is about reducing the side effects and frequency of required changes by splitting the software into multiple, independent parts.
  5. Dependency Inversion Principle: High-level modules should not depend on low-level modules. Both should depend on abstractions. This principle allows for decoupling; a minimal code sketch follows this list.
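
To make the Dependency Inversion Principle more concrete, here is a minimal, self-contained C# sketch. The type names (IDataStore, SqlDataStore, SyncEngine) are invented for illustration and are not taken from any real library: the high-level SyncEngine depends only on an abstraction, so concrete stores can be added or swapped without modifying it.

public interface IDataStore
{
    void Save(string record);
}

// A low-level module that implements the abstraction.
public class SqlDataStore : IDataStore
{
    public void Save(string record)
    {
        // Persist the record to a SQL database (details omitted).
    }
}

// The high-level module depends only on the abstraction (Dependency Inversion).
// New stores can be added without changing this class (Open-Closed), and any
// IDataStore implementation can be substituted (Liskov Substitution).
public class SyncEngine
{
    private readonly IDataStore _store;

    public SyncEngine(IDataStore store) => _store = store;

    public void Push(string record) => _store.Save(record);
}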

Applying SOLID Principles in SyncFramework

The SyncFramework is a great example of how these principles can be applied. Here’s how (a hypothetical code sketch follows the list):

  • Single Responsibility Principle: Each component of the SyncFramework has a specific role. For instance, one component is responsible for tracking changes, while another handles conflict resolution.
  • Open-Closed Principle: The SyncFramework is designed to be extensible. You can add new data sources or change the way data is synchronized without modifying the core framework.
  • Liskov Substitution Principle: The SyncFramework uses base classes and interfaces that allow for substitutable components. This means you can replace or modify components without affecting the overall functionality.
  • Interface Segregation Principle: The SyncFramework provides a range of interfaces, allowing you to choose the ones you need and ignore the ones you don’t.
  • Dependency Inversion Principle: The SyncFramework depends on abstractions, not on concrete classes. This makes it more flexible and adaptable to changes.
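
To give a rough idea of what substitutable components look like in practice, here is a hypothetical sketch. The interfaces and class below are invented for this article and are not the actual SyncFramework API:

using System.Collections.Generic;

// Hypothetical abstractions, not the real SyncFramework types.
public interface IChangeTracker
{
    IReadOnlyList<string> GetPendingChanges();
}

public interface IConflictResolver
{
    string Resolve(string localChange, string remoteChange);
}

// Each implementation has a single responsibility and can be swapped
// without touching the code that consumes the interface.
public class LastWriteWinsResolver : IConflictResolver
{
    public string Resolve(string localChange, string remoteChange) => remoteChange;
}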

 

And that’s a wrap for today! But don’t worry, this is just the beginning. In the upcoming series of articles, we’ll dive deeper into each of these principles. We’ll explore how they’re applied in the source code of the SyncFramework, providing real-world examples to help you understand these concepts better. So, stay tuned for more exciting insights into the world of design patterns! See you in the next article!

 

Related articles

If you want to learn more about data synchronization, you can check out the following blog posts:

  1. Data synchronization in a few words – https://www.jocheojeda.com/2021/10/10/data-synchronization-in-a-few-words/
  2. Parts of a Synchronization Framework – https://www.jocheojeda.com/2021/10/10/parts-of-a-synchronization-framework/
  3. Let’s write a Synchronization Framework in C# – https://www.jocheojeda.com/2021/10/11/lets-write-a-synchronization-framework-in-c/
  4. Synchronization Framework Base Classes – https://www.jocheojeda.com/2021/10/12/synchronization-framework-base-classes/
  5. Planning the first implementation – https://www.jocheojeda.com/2021/10/12/planning-the-first-implementation/
  6. Testing the first implementation – https://youtu.be/l2-yPlExSrg
  7. Adding network support – https://www.jocheojeda.com/2021/10/17/syncframework-adding-network-support/

 

A Beginner’s Guide to System.Security.SecurityRules and SecuritySafeCritical in C#

Introduction

In the .NET Framework, security is a critical concern. Two attributes, System.Security.SecurityRules and SecuritySafeCritical, play a significant role in enforcing Code Access Security (CAS).

System.Security.SecurityRules

The System.Security.SecurityRules attribute specifies the set of security rules that the common language runtime should enforce for an assembly. It has two levels: Level1 and Level2 (Level2 is the default for .NET Framework 4 and later).

Level1

Level1 uses the .NET Framework version 2.0 transparency rules. Here are the key rules for Level1:

  • Public security-critical types and members are treated as security-safe-critical outside the assembly.
  • Security-critical types and members must perform a link demand for full trust to enforce security-critical behavior when they are accessed by external callers.
  • Level1 rules should be used only for compatibility, such as for .NET Framework 2.0 assemblies.

// The attribute below applies to the entire assembly, not just the class that follows it.
[assembly: System.Security.SecurityRules(System.Security.SecurityRuleSet.Level1)]
public class MyClass
{
    // Your code here
}

SecuritySafeCritical

The SecuritySafeCritical attribute identifies types or members as security-critical and safely accessible by transparent code. Code marked with SecuritySafeCritical must undergo a rigorous security audit to ensure that it can be used safely in a secure execution environment. It must validate the permissions of callers to determine whether they have authority to access protected resources used by the code.


[System.Security.SecuritySafeCritical]
public void MyMethod()
{
    // Your code here
}

Relationship between System.Security.SecurityRules and SecuritySafeCritical

The System.Security.SecurityRules and SecuritySafeCritical attributes work together to enforce security in .NET Framework. An assembly marked with SecurityRules(SecurityRuleSet.Level1) uses the .NET Framework version 2.0 transparency rules, where public security-critical types and members are treated as security-safe-critical outside the assembly.
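
A minimal sketch of how the two attributes can appear together in a single file on the .NET Framework; the class and method names are placeholders:

// Opt the whole assembly into the older Level1 transparency rules.
[assembly: System.Security.SecurityRules(System.Security.SecurityRuleSet.Level1)]

public class LegacyComponent
{
    // Safe-critical: callable from transparent code, but responsible for
    // validating callers before doing anything privileged.
    [System.Security.SecuritySafeCritical]
    public void DoPrivilegedWork()
    {
        // Privileged work would go here.
    }
}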

The Concept of Trusted Code

Trusted code refers to code that has been granted the permissions it needs and is therefore considered safe to execute, for example fully trusted code installed on the local machine. In the transparency model, security-critical and security-safe-critical code forms the trusted part of an application: it is allowed to access protected resources, while transparent code is not and must go through a safe-critical gateway to reach them.

Use Cases and Examples

Consider a scenario where you have a method that performs a critical operation, such as accessing a protected resource, and you want to expose it to transparent (partially trusted) code without giving callers unrestricted access. You can mark this method as SecuritySafeCritical so that it acts as a verified gateway: transparent code may call it, but the method itself is responsible for validating its callers before touching the protected resource.


[System.Security.SecuritySafeCritical]
public void AccessProtectedResource()
{
    // Code to access protected resource
}

In this case, the AccessProtectedResource method can be called from transparent code, but because it is security-safe-critical it is expected to demand the appropriate permissions before accessing the protected resource. This helps to prevent unauthorized access.
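
For a slightly fuller picture, here is a sketch of what such a gateway might look like under Code Access Security on the .NET Framework. The file path and the choice of FileIOPermission are assumptions made purely for illustration:

[System.Security.SecuritySafeCritical]
public void AccessProtectedResource()
{
    // Verify that callers up the stack have read access to the resource
    // before performing the critical operation (hypothetical path).
    var permission = new System.Security.Permissions.FileIOPermission(
        System.Security.Permissions.FileIOPermissionAccess.Read,
        @"C:\ProtectedData\settings.dat");
    permission.Demand();

    // Code to access the protected resource goes here.
}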

Conclusion

Understanding the System.Security.SecurityRules and SecuritySafeCritical attributes is crucial when developing secure .NET applications. By using these attributes correctly, you can enforce robust security rules and protect your application from potential threats. Always remember, with great power comes great responsibility!

I hope this article helps you understand these concepts better. Happy coding! 😊

 

Finding Out the Invoking Methods in .NET

In .NET, it’s possible to find out which methods are invoking a specific method. This can be particularly useful when you don’t have the callers’ source code available. One way to achieve this is by throwing an exception and examining the call stack. Here’s how you can do it:

Throwing an Exception

First, within the method of interest, you need to throw an exception. Here’s an example:


public void MethodOfInterest()
{
    throw new Exception("MethodOfInterest was called");
}

Catching the Exception

Next, you need to catch the exception in a higher level method that calls the method of interest:


public void InvokingMethod()
{
    try
    {
        MethodOfInterest();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.StackTrace);
    }
}

In the catch block, we print the stack trace of the exception to the console. The stack trace is a string that represents a stack of method calls that leads to the location where the exception was thrown.

Examining the Call Stack

The call stack is a list of all the methods that were in the process of execution at the time the exception was thrown. By examining the call stack, you can see which methods were invoking the method of interest.

Here’s an example of what a call stack might look like:


at Namespace.MethodOfInterest() in C:\Path\To\File.cs:line 10
at Namespace.InvokingMethod() in C:\Path\To\File.cs:line 20

In this example, InvokingMethod was the method that invoked MethodOfInterest.
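
If you only need the names of the calling methods and would rather not throw an exception, the same information can be captured directly with System.Diagnostics.StackTrace. Here is a minimal sketch of that alternative; note that inlining in Release builds can remove frames, so the reported caller may differ:

using System;
using System.Diagnostics;

public class Caller
{
    public void MethodOfInterest()
    {
        // Capture the current call stack without throwing.
        var stackTrace = new StackTrace();

        // Frame 0 is this method; frame 1 is the direct caller.
        var invoker = stackTrace.GetFrame(1)?.GetMethod();
        Console.WriteLine($"Called by: {invoker?.DeclaringType?.Name}.{invoker?.Name}");
    }
}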

Conclusion

By throwing an exception and examining the call stack, you can find out which methods are invoking a specific method in .NET. This can be a useful debugging tool, especially when you don’t have the source code available.

Introduction to Machine Learning in C#: Spam Detection using Binary Classification

This example demonstrates the basics of machine learning in C# using ML.NET, Microsoft’s machine learning framework specifically designed for .NET applications. ML.NET offers a versatile, cross-platform framework that simplifies integrating machine learning into .NET applications, making it accessible for developers familiar with the .NET ecosystem.

Technologies Used

  • C#: A modern, object-oriented programming language developed by Microsoft, which is widely used for a variety of applications. In this example, C# is used to define data models, process data, and implement the machine learning pipeline.
  • ML.NET: An open-source and cross-platform machine learning framework for .NET. It is used in this example to create a machine learning model for classifying emails as spam or not spam. ML.NET simplifies the process of training, evaluating, and consuming machine learning models in .NET applications.
  • .NET Core: A cross-platform version of .NET for building applications that run on Windows, Linux, and macOS. It provides the runtime environment for our C# application.

The example focuses on a simple spam detection system. It utilizes text data processing and binary classification, two common tasks in machine learning, to classify emails into spam and non-spam categories. This is achieved through the use of a logistic regression model, a fundamental algorithm for binary classification problems.

Creating an NUnit Test Project in Visual Studio Code

Setting up NUnit for DecisionTreeDemo

    • Install .NET Core SDK

      Download and install the .NET Core SDK from the .NET official website.

    • Install Visual Studio Code

      Download and install Visual Studio Code (VS Code) from here. Also, install the C# extension for VS Code by Microsoft.

    • Create a New .NET Core Project

      Open VS Code, and in the terminal, create a new .NET Core project:

      dotnet new console -n DecisionTreeDemo
      cd DecisionTreeDemo
    • Add the ML.NET Package

      Add the ML.NET package to your project:

      dotnet add package Microsoft.ML
    • Create the Test Project

      Create the test project as a sibling of the main project, then initialize a new NUnit test project:

      cd ..
      mkdir DecisionTreeDemo.Tests
      cd DecisionTreeDemo.Tests
      dotnet new nunit
    • Add Required Packages to Test Project

      Add the necessary NUnit and ML.NET packages:

      dotnet add package NUnit
      dotnet add package Microsoft.NET.Test.Sdk
      dotnet add package NUnit3TestAdapter
      dotnet add package Microsoft.ML
    • Reference the Main Project

      Reference the main project:

      dotnet add reference ../DecisionTreeDemo/DecisionTreeDemo.csproj
    • Write Test Cases

      Write NUnit test cases within your test project to test different functionalities of your ML.NET application.

      Define the Data Model for the Email

      Include the content of the email and whether it’s classified as spam. The LoadColumn and ColumnName attributes come from the Microsoft.ML.Data namespace:

      using Microsoft.ML.Data;

      public class Email
      {
          [LoadColumn(0)]
          public string Content { get; set; }

          [LoadColumn(1), ColumnName("Label")]
          public bool IsSpam { get; set; }
      }

      Define the Model for Spam Prediction

      This model holds the output of the trained model: whether an email is predicted to be spam.

      public class SpamPrediction
      {
          [ColumnName("PredictedLabel")]
          public bool IsSpam { get; set; }
      }

      Write the test case (wrapped here in an NUnit test class; the class and method names are placeholders):

      using System.Collections.Generic;
      using System.Diagnostics;
      using Microsoft.ML;
      using NUnit.Framework;

      [TestFixture]
      public class SpamDetectionTests
      {
          [Test]
          public void SampleSpamEmail_IsClassifiedAsSpam()
          {
              // Create a new ML context for the application, the starting point for ML.NET operations.
              var mlContext = new MLContext();

              // Example dataset of emails. In a real-world scenario, this would be much
              // larger and possibly loaded from an external source.
              var data = new List<Email>
              {
                  new Email { Content = "Buy cheap products now", IsSpam = true },
                  new Email { Content = "Meeting at 3 PM", IsSpam = false },
                  // Additional data can be added here...
              };

              // Load the data into the ML.NET data model.
              var trainData = mlContext.Data.LoadFromEnumerable(data);

              // Define the data processing pipeline: featurize the text (i.e., convert it
              // into numeric features) and then apply a logistic regression trainer.
              var pipeline = mlContext.Transforms.Text.FeaturizeText("Features", nameof(Email.Content))
                  .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

              // Train the model on the loaded data.
              var model = pipeline.Fit(trainData);

              // Create a prediction engine for making predictions on individual data samples.
              var predictionEngine = mlContext.Model.CreatePredictionEngine<Email, SpamPrediction>(model);

              // Create a sample email to test the model.
              var sampleEmail = new Email { Content = "Special discount, buy now!" };
              var prediction = predictionEngine.Predict(sampleEmail);

              // Output the prediction and assert that the sample is classified as spam.
              Debug.WriteLine($"Email: '{sampleEmail.Content}' is {(prediction.IsSpam ? "spam" : "not spam")}");
              Assert.IsTrue(prediction.IsSpam);
          }
      }

    • Running Tests

      Run the tests with the following command:

      dotnet test

As you can see, the test passes because the sample email contains the word “buy”, which also appeared in a training example labeled as spam.
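
Once the prediction engine has been created it can be reused for any number of messages. As a small follow-up, the lines below could be appended to the end of the test case above (they reuse the predictionEngine variable, and the sample text is made up); with such a tiny training set the prediction for new text is of course not guaranteed to be accurate:

// Reuse the same prediction engine for another message.
var ordinaryEmail = new Email { Content = "Lunch tomorrow at noon?" };
var ordinaryPrediction = predictionEngine.Predict(ordinaryEmail);
Debug.WriteLine($"Email: '{ordinaryEmail.Content}' is {(ordinaryPrediction.IsSpam ? "spam" : "not spam")}");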

You can download the source code for this article here.

This article has explored the fundamentals of machine learning in C# using the ML.NET framework. By defining specific data models and utilizing ML.NET’s powerful features, we demonstrated how to build a simple yet effective spam detection system. This example serves as a gateway into the vast world of machine learning, showcasing the potential for integrating AI technologies into .NET applications. The skills and concepts learned here lay the groundwork for further exploration and development in the exciting field of machine learning and artificial intelligence.