SyncFramework for XPO: Updated for .NET 8 & 9 and DevExpress 24.2.3!

SyncFramework for XPO is a specialized implementation of our delta encoding synchronization library, designed specifically for DevExpress XPO users. It enables efficient data synchronization by tracking and transmitting only the changes between data versions, optimizing both bandwidth usage and processing time.

What’s New

  • Base target framework updated to .NET 8.0
  • Added compatibility with .NET 9.0
  • Updated DevExpress XPO dependencies to 24.2.3
  • Continued support for delta encoding synchronization
  • Various performance improvements and bug fixes

Framework Compatibility

  • Primary Target: .NET 8.0
  • Additional Support: .NET 9.0

Our XPO implementation continues to serve the DevExpress community.

Key Features

  • Seamless integration with DevExpress XPO
  • Efficient delta-based synchronization
  • Support for multiple database providers
  • Cross-platform compatibility
  • Easy integration with existing XPO and XAF applications

As always, if you own a license, you can compile the source code yourself from our GitHub repository. The framework maintains its commitment to providing reliable data synchronization for XPO applications.

Happy Delta Encoding! 🚀


SyncFramework Update: Now Supporting .NET 9 and EfCore 9!

SyncFramework is a C# library that simplifies data synchronization using delta encoding technology. Instead of transferring entire datasets, it efficiently synchronizes by tracking and transmitting only the changes between data versions, significantly reducing bandwidth and processing overhead.
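Conceptually, the delta exchange can be pictured in a few lines of C#. The sketch below is illustrative only and is not the BIT.Data.Sync API; the record type and version counter are assumptions:

using System.Collections.Generic;
using System.Linq;

public record CustomerRow(int Id, string Name, int Version);

public static class DeltaSketch
{
    // Only rows changed since the peer's last known version travel over
    // the wire, instead of the entire dataset.
    public static IReadOnlyList<CustomerRow> ComputeDelta(
        IEnumerable<CustomerRow> localRows, int peerLastSeenVersion) =>
        localRows.Where(r => r.Version > peerLastSeenVersion).ToList();
}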

What’s New

  • All packages now target .NET 9
  • BIT.Data.Sync packages updated to support the latest framework
  • Entity Framework Core packages upgraded to EF Core 9
  • Various minor fixes and improvements

Available Implementations

  • SyncFramework for XPO: For DevExpress XPO users
  • SyncFramework for Entity Framework Core: For EF Core users

Package Statistics

Our packages have been serving the community well, with steady adoption:

  • BIT.Data.Sync: 2,142 downloads
  • BIT.Data.Sync.AspNetCore: 1,064 downloads
  • BIT.Data.Sync.AspNetCore.Xpo: 521 downloads
  • BIT.Data.Sync.EfCore: 1,691 downloads
  • BIT.Data.Sync.EfCore.Npgsql: 1,120 downloads
  • BIT.Data.Sync.EfCore.Pomelo.MySql: 1,172 downloads
  • BIT.Data.Sync.EfCore.Sqlite: 887 downloads
  • BIT.Data.Sync.EfCore.SqlServer: 982 downloads

Resources

NuGet Packages
Source Code

As always, you can compile the source code yourself from our GitHub repository. The framework continues to provide reliable data synchronization across different platforms and databases.

Happy Delta Encoding! 🚀

ADOMD.NET: Beyond Rows and Columns – The Multidimensional Evolution of ADO.NET

When I first encountered the challenge of migrating hundreds of Visual Basic 6 reports to .NET, I never imagined it would lead me down a path of discovering specialized data analytics tools. Today, I want to share my experience with ADOMD.NET and how it could have transformed our reporting challenges, even though we couldn’t implement it due to our database constraints.

The Challenge: The Sales Gap Report

The story begins with a seemingly simple report called “Sales Gap.” Its purpose was critical: identify periods when regular customers stopped purchasing specific items. For instance, if a customer typically bought 10 units monthly from January to May, then suddenly stopped in June and July, sales representatives needed to understand why.

This report required complex queries across multiple transactional tables:

  • Invoicing
  • Sales
  • Returns
  • Debits
  • Credits

Initially, the report took about a minute to run. As our data grew, so did the execution time—eventually reaching an unbearable 15 minutes. We were stuck with a requirement to use real-time transactional data, making traditional optimization techniques like data warehousing off-limits.

Enter ADOMD.NET: A Specialized Solution

ADOMD.NET (ActiveX Data Objects Multidimensional .NET) emerged as a potential solution. Here’s why it caught my attention:

Key Features:

  1. Multidimensional Analysis

    Unlike traditional SQL queries, ADOMD.NET uses MDX (Multidimensional Expressions), specifically designed for analytical queries. Here’s a basic example:

    string mdxQuery = @"
        SELECT 
            {[Measures].[Sales Amount]} ON COLUMNS,
            {[Date].[Calendar Year].MEMBERS} ON ROWS
        FROM [Sales Cube]
        WHERE [Product].[Category].[Electronics]";
  2. Performance Optimization

    ADOMD.NET is built for analytical workloads, offering better performance for complex calculations and aggregations. It achieves this through:

    • Specialized data structures for multidimensional analysis
    • Efficient handling of hierarchical data
    • Built-in support for complex calculations
  3. Advanced Analytics Capabilities

    The tool supports sophisticated analysis patterns like:

    string mdxQuery = @"
        WITH MEMBER [Measures].[GrowthVsPreviousYear] AS
            ([Measures].[Sales Amount] - 
            ([Measures].[Sales Amount], [Date].[Calendar Year].PREVMEMBER)
            )/([Measures].[Sales Amount], [Date].[Calendar Year].PREVMEMBER)
        SELECT 
            {[Measures].[Sales Amount], [Measures].[GrowthVsPreviousYear]} 
        ON COLUMNS...";
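For reference, these MDX queries are executed from C# through the ADOMD.NET client library (Microsoft.AnalysisServices.AdomdClient). A minimal sketch; the connection string, catalog name, and column layout are assumptions:

using System;
using Microsoft.AnalysisServices.AdomdClient;

using (var connection = new AdomdConnection(
    "Data Source=localhost;Catalog=SalesAnalysis"))
{
    connection.Open();

    using (var command = new AdomdCommand(mdxQuery, connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Column 0 holds the row member caption; later columns hold the measures.
            Console.WriteLine($"{reader.GetValue(0)}: {reader.GetValue(1)}");
        }
    }
}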

Lessons Learned

While we couldn’t implement ADOMD.NET due to our use of Pervasive Database instead of SQL Server, the investigation taught me valuable lessons about report optimization:

  1. The importance of choosing the right tools for analytical workloads
  2. The limitations of running complex analytics on transactional databases
  3. The value of specialized query languages for different types of data analysis

Modern Applications

Today, ADOMD.NET continues to be relevant for organizations using:

  • SQL Server Analysis Services (SSAS)
  • Azure Analysis Services
  • Power BI Premium datasets

If I were facing the same challenge today with SQL Server, ADOMD.NET would be my go-to solution for:

  • Complex sales analysis
  • Customer behavior tracking
  • Performance-intensive analytical reports

Conclusion

While our specific situation with Pervasive Database prevented us from using ADOMD.NET, it remains a powerful tool for organizations using Microsoft’s analytics stack. The experience taught me that sometimes the solution isn’t about optimizing existing queries, but about choosing the right specialized tools for analytical workloads.

Remember: Just because you can run analytics on your transactional database doesn’t mean you should. Tools like ADOMD.NET exist for a reason, and understanding when to use them can save countless hours of optimization work and provide better results for your users.


The AnyCPU Illusion: Native Dependencies in .NET Applications

Introduction

In the .NET ecosystem, “AnyCPU” is often considered a silver bullet for cross-platform deployment. However, this assumption can lead to significant problems when your application depends on native assemblies. In this post, I want to share a personal story that highlights how I discovered these limitations and how native dependencies affect the true portability of AnyCPU applications, especially for database access through ADO.NET and popular ORMs.

My Journey to Understanding AnyCPU’s Limitations

Every year, around Thanksgiving or Christmas, I visit my friend, brother, and business partner Javier. Two years ago, during one of these visits, I made a decision that would lead me to a pivotal realization about AnyCPU architecture.

At the time, I was tired of traveling with my bulky MSI GE72 Apache Pro-24 gaming laptop. According to MSI’s official specifications, it weighed 5.95 pounds—but that number didn’t include the hefty charger, which brought the total to around 12 pounds. Later, I upgraded to an MSI GF63 Thin, which was lighter at 4.10 pounds—but with the charger, it was still around 7.5 pounds. Lugging these laptops through airports felt like a workout.

Determined to travel lighter, I purchased a MacBook Air with the M2 chip. At just 2.7 pounds, including the charger, the MacBook Air felt like a breath of fresh air. The Apple Silicon chip was incredibly fast, and I immediately fell in love with the machine.

Having used a MacBook Pro with Bootcamp and Windows 7 years ago, I thought I could recreate that experience by running a Windows virtual machine on my MacBook Air to check projects and do some light development while traveling.

The Virtualization Experiment

As someone who loves virtualization, I eagerly set up a Windows virtual machine on my MacBook Air. I grabbed my trusty Windows x64 ISO, set up the virtual machine, and attempted to boot it—but it failed. I quickly realized the issue was related to CPU architecture. My x64 ISO wasn’t compatible with the ARM-based M2 chip.

Undeterred, I downloaded a Windows 11 ISO for ARM architecture and created the VM. Success! Windows was up and running, and I installed Visual Studio along with my essential development tools, including DevExpress XPO (my favorite ORM).

The Demo Disaster

The real test came during a trip to Dubai, where I was scheduled to give a live demo showcasing how quickly you can develop Line-of-Business (LOB) apps with XAF. Everything started smoothly until I tried to connect my XAF app to the database. Despite my best efforts, the connection failed.

In the middle of the demo, I switched to an in-memory data provider to salvage the presentation. After the demo, I dug into the issue and realized the root cause was related to the CPU architecture. The native database drivers I was using weren’t compatible with the ARM architecture.

A Familiar Problem

This situation reminded me of the transition from x86 to x64 years ago. Back then, I encountered similar issues where native drivers wouldn’t load unless they matched the process architecture.

The Native Dependency Challenge

Platform-Specific Loading Requirements

Native DLLs must exactly match the CPU architecture of your application:

  • If your app runs as x86, it can only load x86 native DLLs.
  • If running as x64, it requires x64 native DLLs.
  • ARM requires ARM-specific binaries.
  • ARM64 requires ARM64-specific binaries.

There is no flexibility—attempting to load a DLL compiled for a different architecture results in an immediate failure.
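Before digging into search paths, it is worth knowing how to check what your process is actually running as. The snippet below uses standard .NET APIs and is handy for diagnosing mismatches before they surface as a DllNotFoundException:

using System;
using System.Runtime.InteropServices;

// Diagnose architecture mismatches up front.
Console.WriteLine($"Process architecture: {RuntimeInformation.ProcessArchitecture}");
Console.WriteLine($"OS architecture:      {RuntimeInformation.OSArchitecture}");
Console.WriteLine($"64-bit process:       {Environment.Is64BitProcess}");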

How Native Libraries are Loaded

When your application loads a native DLL, the operating system follows a specific search pattern:

  1. The application’s directory
  2. System directories (System32/SysWOW64)
  3. Directories listed in the PATH environment variable

Crucially, these native libraries must match the exact architecture of the running process.

// This seemingly simple code
[DllImport("native.dll")]
static extern void NativeMethod();

// Actually requires:
// - native.dll compiled for x86 when running as 32-bit
// - native.dll compiled for x64 when running as 64-bit
// - native.dll compiled for ARM64 when running on ARM64

The SQL Server Example

Let’s look at SQL Server connectivity, a common scenario where the AnyCPU illusion breaks down:

// Traditional ADO.NET connection
using (var connection = new SqlConnection(connectionString))
{
    // This requires SQL Native Client
    // Which must match the process architecture
    await connection.OpenAsync();
}

Even though your application is compiled as AnyCPU, the SQL Native Client must match the process architecture. This becomes particularly problematic on newer architectures like ARM64, where native drivers may not be available.

Impact on ORMs

Entity Framework Core

Entity Framework Core, despite its modern design, still relies on database providers that may have native dependencies:

public class MyDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // This configuration depends on:
        // 1. SQL Native Client
        // 2. Microsoft.Data.SqlClient native components
        optionsBuilder.UseSqlServer(connectionString);
    }
}

DevExpress XPO

DevExpress XPO faces similar challenges:

// XPO configuration
string connectionString = MSSqlConnectionProvider.GetConnectionString("server", "database");
XpoDefault.DataLayer = XpoDefault.GetDataLayer(connectionString, AutoCreateOption.DatabaseAndSchema);

// The MSSqlConnectionProvider relies on the same native SQL Server components

Solutions and Best Practices

1. Architecture-Specific Deployment

Instead of relying on AnyCPU, consider creating architecture-specific builds:

<PropertyGroup>
    <Platforms>x86;x64;arm64</Platforms>
    <RuntimeIdentifiers>win-x86;win-x64;win-arm64</RuntimeIdentifiers>
</PropertyGroup>

2. Runtime Provider Selection

Implement smart provider selection based on the current architecture:

public static class DatabaseProviderFactory
{
    public static IDbConnection GetProvider()
    {
        return RuntimeInformation.ProcessArchitecture switch
        {
            Architecture.X86 => new SqlConnection(), // x86 native provider
            Architecture.X64 => new SqlConnection(), // x64 native provider
            Architecture.Arm64 => new Microsoft.Data.SqlClient.SqlConnection(), // ARM64 support
            _ => throw new PlatformNotSupportedException()
        };
    }
}

3. Managed Fallbacks

Implement fallback strategies when native providers aren’t available:

public class DatabaseConnection
{
    private readonly string _connectionString;

    public DatabaseConnection(string connectionString)
        => _connectionString = connectionString;

    public async Task<IDbConnection> CreateConnectionAsync()
    {
        try
        {
            var connection = new SqlConnection(_connectionString);
            await connection.OpenAsync();
            return connection;
        }
        catch (DllNotFoundException)
        {
            var managedConnection = new Microsoft.Data.SqlClient.SqlConnection(_connectionString);
            await managedConnection.OpenAsync();
            return managedConnection;
        }
    }
}

4. Deployment Considerations

  • Include all necessary native dependencies for each target architecture.
  • Use architecture-specific directories in your deployment.
  • Consider self-contained deployment to include the correct runtime.
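For instance, a self-contained, architecture-specific publish can be produced with the standard dotnet CLI; the runtime identifier below is just the ARM64 example:

dotnet publish -c Release -r win-arm64 --self-contained true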

Real-World Implications

This experience taught me that while AnyCPU provides excellent flexibility for managed code, it has limitations when dealing with native dependencies. These limitations become more apparent in scenarios like cloud deployments, ARM64 devices, and live demos.

Conclusion

The transition to ARM architecture is accelerating, and understanding the nuances of AnyCPU and native dependencies is more important than ever. By planning for architecture-specific deployments and implementing fallback strategies, you can build more resilient applications that can thrive in a multi-architecture world.

User-Defined Functions in SQLite: Enhancing SQL with Custom C# Procedures

SQLite, known for its simplicity and lightweight architecture, offers unique opportunities for developers to integrate custom functions directly into their applications. Unlike most databases that require learning an SQL dialect for procedural programming, SQLite operates in-process with your application. This design choice allows developers to define functions using their application’s programming language, enhancing the database’s flexibility and functionality.

Scalar Functions

Scalar functions in SQLite are designed to return a single value for each row in a query. Developers can define new scalar functions or override built-in ones using the CreateFunction method. This method supports various data types for parameters and return types, ensuring versatility in function creation. Developers can specify the state argument to pass a consistent value across all function invocations, avoiding the need for closures. Additionally, marking a function as isDeterministic optimizes query compilation by SQLite if the function’s output is predictable based on its input.

Example: Adding a Scalar Function


connection.CreateFunction(
    "volume",
    (double radius, double height) => Math.PI * Math.Pow(radius, 2) * height);

var command = connection.CreateCommand();
command.CommandText = @"
    SELECT name,
           volume(radius, height) AS volume
    FROM cylinder
    ORDER BY volume DESC
";
        
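The state and isDeterministic arguments mentioned above look like this in practice. This is a minimal sketch assuming the Microsoft.Data.Sqlite overload of CreateFunction that accepts a state value; the tax-rate function itself is hypothetical:

// state is passed to every invocation, avoiding a closure over taxRate;
// isDeterministic lets SQLite cache results during query compilation.
var taxRate = 0.13;

connection.CreateFunction(
    "with_tax",
    taxRate,
    (double rate, double price) => price * (1 + rate),
    isDeterministic: true);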

Operators

SQLite implements several operators as scalar functions, and defining those functions in your application overrides the corresponding operators' default behavior. For example, the glob, like, and regexp functions can be custom-defined to change how the GLOB, LIKE, and REGEXP operators behave in SQL queries.

Example: Defining the regexp Function


connection.CreateFunction(
    "regexp",
    (string pattern, string input) => Regex.IsMatch(input, pattern));

var command = connection.CreateCommand();
command.CommandText = @"
    SELECT count()
    FROM user
    WHERE bio REGEXP '\w\. {2,}\w'
";
var count = command.ExecuteScalar();

Aggregate Functions

Aggregate functions return a consolidated value from multiple rows. Using CreateAggregate, developers can define and override these functions. The seed argument sets the initial context state, and the func argument is executed for each row. The resultSelector parameter, if specified, calculates the final result from the context after processing all rows.

Example: Creating an Aggregate Function for Standard Deviation


connection.CreateAggregate(
    "stdev",
    (Count: 0, Sum: 0.0, SumOfSquares: 0.0),
    ((int Count, double Sum, double SumOfSquares) context, double value) => {
        context.Count++;
        context.Sum += value;
        context.SumOfSquares += value * value;
        return context;
    },
    context => {
        var variance = context.SumOfSquares - context.Sum * context.Sum / context.Count;
        return Math.Sqrt(variance / context.Count);
    });

var command = connection.CreateCommand();
command.CommandText = @"
    SELECT stdev(gpa)
    FROM student
";
var stdDev = command.ExecuteScalar();

Errors

When a user-defined function throws an exception in SQLite, the message is returned to the database engine, which then raises an error. Developers can customize the SQLite error code by throwing a SqliteException with a specific SqliteErrorCode.
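A minimal sketch, assuming Microsoft.Data.Sqlite (which the CreateFunction examples above use); the validation function is hypothetical, and 19 is SQLite's SQLITE_CONSTRAINT result code:

using Microsoft.Data.Sqlite;

connection.CreateFunction(
    "validate_age",
    (long age) =>
    {
        // The exception message travels back to SQLite, which raises the error.
        if (age < 0)
            throw new SqliteException("Age cannot be negative.", 19); // SQLITE_CONSTRAINT
        return age;
    });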

Debugging

SQLite directly invokes the implementation of user-defined functions, allowing developers to insert breakpoints and leverage the full .NET debugging experience. This integration facilitates debugging and enhances the development of robust, error-free custom functions.

This article illustrates the power and flexibility of SQLite’s approach to user-defined functions, demonstrating how developers can extend the functionality of SQL with the programming language of their application, thereby streamlining the development process and enhancing database interaction.

Github Repo

What is an O.R.M (Object-Relational Mapping)

ORM is a technique for converting data between incompatible type systems using object-oriented programming languages. In the context of database management, an ORM provides a way to interact with a database using objects and methods, rather than writing raw SQL queries. This allows for a higher level of abstraction, reducing the amount of repetitive and error-prone code that needs to be written, and making it easier to maintain the application.

ORM and RDBMS

Object-Relational Mapping (ORM) tools and RDBMS have different approaches to querying data.

RDBMS uses SQL which is a low-level, language-agnostic query language that is used to communicate with relational databases. When using SQL, developers write raw SQL statements that are executed directly against the database. The results of the query are then returned in the form of a table or set of tables, which must be manually mapped to objects in code.

On the other hand, an ORM tool abstracts the underlying SQL code, allowing developers to query data using an object-oriented syntax that is similar to the programming language used in the application.

The ORM tool automatically generates the necessary SQL statements and executes them against the database, and then maps the resulting data to objects in code.

One of the main advantages of using an ORM tool is that it eliminates the need for manual data mapping, making it easier for developers to write data-access code and reducing the chances of errors. Additionally, because ORM tools use a higher-level syntax, they can also be more intuitive and easier to use than raw SQL.

However, ORM tools can also introduce performance overhead and complexity, as they must manage the mapping between the database and objects in code. In some cases, raw SQL may still be necessary to achieve the desired performance or to perform complex data manipulation that is not supported by the ORM tool.

As a simple example we will use the database table “customer”. It has four columns:

  • id: A unique identifier for each customer, usually an auto-incrementing integer.
  • name: The name of the customer, represented as a string.
  • email: The email address of the customer, represented as a string.
  • address: The address of the customer, represented as a string.

Each row in the table represents a single customer, and each column represents a piece of information about that customer. The table is used to store the data for the customer objects created in the code. The ORM maps the table to a class, allowing you to interact with the data using objects and methods.

Here is an example of how an ORM can map a database table named “customer” to a class representation in C#:

class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string Address { get; set; }
}

// The ORM maps the customer table in the database to the Customer class.
// The columns of the table become properties of the class, and each row in the table becomes an instance of the class.

In this example, the ORM abstracts away the underlying database and provides a high-level, object-oriented interface for working with the data.

The main advantage of using an ORM (Object-Relational Mapping) instead of using the direct results from the database in C# is increased abstraction and convenience. This provides several benefits:

  1. Improved readability and maintainability: Code written using an ORM is easier to understand and maintain, as it uses a higher-level, object-oriented interface, rather than raw SQL.
  2. Reduced code duplication: With an ORM, you can encapsulate database-related logic in reusable classes and methods, reducing the amount of repetitive code that needs to be written.
  3. Increased safety: An ORM can help prevent SQL injection attacks by automatically escaping user input, ensuring that data is safely inserted into the database.
  4. Increased flexibility: An ORM provides a layer of abstraction between the application code and the database, allowing you to switch between different database systems without changing the application code.

Direct database access

var connection = new SqlConnection("connection string");
connection.Open();
var command = new SqlCommand("SELECT * FROM customers", connection);
var reader = command.ExecuteReader();
List<Customer> customers = new List<Customer>();
while (reader.Read())
{
    var customer = new Customer
    {
        Id = reader.GetInt32(0),
        Name = reader.GetString(1),
        Email = reader.GetString(2),
        Address = reader.GetString(3)
    };
    customers.Add(customer);
}
reader.Close();
connection.Close();

ORM access

var customers = ORM.Query<Customer>().ToList();

As you can see, using an ORM simplifies the code and makes it more readable, while also providing additional safety and flexibility benefits.

Different types of O.R.M

Micro O.R.M

A micro ORM is a type of Object-Relational Mapping (ORM) library that is designed to be lightweight and simple, offering a minimalistic and easy-to-use interface for interacting with a relational database.

Micro ORMs are typically faster and require less configuration and setup than full-featured ORMs, but also have fewer features and less advanced capabilities.

Like full-featured ORMs, micro ORMs provide a higher-level, object-oriented interface for working with databases, allowing you to interact with data using classes, properties, and methods rather than raw SQL queries. However, they typically offer a smaller, more focused feature set: support for basic CRUD (Create, Read, Update, Delete) operations, plus basic querying and data-mapping capabilities.

Micro ORMs are often used in small projects or in environments where performance is critical. They are also commonly used in microservices and other architectures that prioritize simplicity, scalability, and ease of deployment.
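As an illustration, with a micro ORM such as Dapper the mapping happens directly from a raw SQL query via its Query<T> extension method. A minimal sketch, with a placeholder connection string:

using System.Linq;
using Dapper;
using Microsoft.Data.SqlClient;

using (var connection = new SqlConnection("connection string"))
{
    // Dapper matches result columns to Customer properties by name (case-insensitive).
    var customers = connection.Query<Customer>(
        "SELECT id, name, email, address FROM customer").ToList();
}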

Full-fledged Object-Relational Mapping (ORM)

A full-fledged Object-Relational Mapping (ORM) is a software library that provides a convenient and efficient way of working with relational databases.

A full-fledged ORM typically has the following characteristics:

  1. Object Mapping: The ORM maps database tables to object-oriented classes, allowing developers to interact with the database using objects instead of tables.
  2. Query Generation: The ORM generates SQL statements based on the object-oriented query syntax, reducing the need for developers to write and manage raw SQL.
  3. Data Validation: The ORM provides built-in data validation and type checking, reducing the risk of incorrect data being written to the database.
  4. Caching: The ORM may provide caching of data to improve performance, reducing the number of database round trips.
  5. Transactions: The ORM supports transactions, making it easier to ensure that database operations are atomic and consistent.
  6. Database Independence: The ORM is typically designed to be database agnostic, allowing developers to switch between different database platforms without changing the application code.
  7. Lazy Loading: The ORM supports lazy loading, which can improve performance by loading related objects only when they are needed.
  8. Performance: A full-fledged ORM is designed to deliver good performance, and most aspects of it can be tuned. However, making those customizations effective requires a solid understanding of how the ORM operates.

One of the disadvantages of using an Object-Relational Mapping (ORM) tool is that you may not be able to create specialized, fine-tuned queries with the same level of precision as you can with raw SQL. This is because ORMs typically abstract the underlying database and expose a higher-level, more object-oriented API for querying data.

For example, it may not be possible to use specific SQL constructs, such as subqueries or window functions, in your ORM queries. Additionally, some ORMs may not provide a way to optimize query performance, such as by specifying index hints or controlling the execution plan.

As a result, developers who need to write specialized or fine-tuned queries may find it easier and more efficient to write raw SQL statements. However, this comes at the cost of having to manually manage the mapping between the database and the application, which is one of the main benefits of using an ORM in the first place.

ORM and Querying data

Before the existence of LINQ, Object-Relational Mapping (ORM) tools used a proprietary query language to query data. This meant that each ORM had to invent its own criteria or query language, and developers had to learn the specific syntax of each ORM in order to write queries.

For example, in XPO, developers used the Criteria API to write queries. The Criteria API allowed developers to write queries in a type-safe and intuitive manner, but the syntax was specific to XPO and not easily transferable to other ORMs.

Similarly, in NHibernate, developers used the Hibernate Query Language (HQL) to write queries. Like the Criteria API in XPO, HQL was a proprietary query language specific to NHibernate, and developers had to learn the syntax in order to write queries.

This lack of a standardized query language was one of the main limitations of ORMs before the introduction of LINQ. LINQ provided a common, language-agnostic query language that could be used with a variety of data sources, including relational databases. This made it easier for developers to write data-access code, as they only had to learn one query language, regardless of the ORM or data source being used.
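To make the contrast concrete, here is roughly what the same query looked like in each dialect. A sketch only: the session objects are assumed to exist, and exact APIs vary by version:

using DevExpress.Data.Filtering;
using DevExpress.Xpo;
using NHibernate;

// DevExpress XPO: the Criteria API
var xpoCustomers = new XPCollection<Customer>(
    xpoSession,
    CriteriaOperator.Parse("Address LIKE '%San Salvador%'"));

// NHibernate: Hibernate Query Language (HQL)
var nhCustomers = nhSession
    .CreateQuery("from Customer c where c.Address like :addr")
    .SetParameter("addr", "%San Salvador%")
    .List<Customer>();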

The introduction of LINQ: a new era for O.R.M in .NET

Language Integrated Query (LINQ) is a Microsoft .NET Framework component that was introduced in .NET Framework 3.5. It is a set of language and library features that enables developers to write querying and manipulation operations over data in a more intuitive and readable way.

LINQ was created to address the need for a more unified and flexible approach to querying and manipulating data in .NET applications. Prior to LINQ, developers had to write separate code to access and manipulate data in different data sources, such as databases, XML documents, and in-memory collections. This resulted in a fragmented and difficult-to-maintain codebase, with different syntax and approaches for different data sources.

With LINQ, developers can now write a single querying language that works with a wide range of data sources, including databases, XML, in-memory collections, and more.

In addition, LINQ also provides a number of other benefits, such as improved performance, better type safety, and integrated support for data processing operations, such as filtering, grouping, and aggregating.


using (var context = new DbContext())
{
   var selectedCustomers = context.Customers
                                   .Where(c => c.Address.Contains("San Salvador"))
                                   .Select(c => new 
                                   { 
                                       c.Id, 
                                       c.Name, 
                                       c.Email, 
                                       c.Address 
                                   })
                                   .ToList();

    foreach (var customer in selectedCustomers)
    {
        Console.WriteLine($"Id: {customer.Id}, Name: {customer.Name}, Email: {customer.Email}, Address: {customer.Address}");
    }
}


And that’s all for this post, until next time ))

We are excited to announce that we are currently in the process of writing a comprehensive book about DevExpress XPO. As we work on this project, we believe it is essential to involve our readers and gather their valuable feedback. Therefore, we have decided to share articles from the book as we complete them, giving you an opportunity to provide input and suggestions that we can consider for inclusion in the final release. Keep in mind that the content presented is subject to change. We greatly appreciate your participation in this collaborative effort.

Related Articles

ADO The origin of data access in .NET

Relational database systems: the holy grail of data