by Joche Ojeda | Oct 8, 2024 | A.I, Blazor, Semantic Kernel
Are you excited to bring powerful AI chat completions to your web application? I sure am! In this post, we’ll walk through how to integrate the DevExpress Chat component with the Semantic Kernel using OpenAI. This combination can make your app more interactive and intelligent, and it’s surprisingly simple to set up. Let’s dive in!
Step 1: Adding NuGet Packages
First, let’s ensure we have all the necessary packages. Open your DevExpress.AI.Samples.Blazor.csproj file and add the following NuGet references:
<ItemGroup>
  <PackageReference Include="Microsoft.KernelMemory.Abstractions" Version="0.78.241007.1" />
  <PackageReference Include="Microsoft.KernelMemory.Core" Version="0.78.241007.1" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.21.1" />
</ItemGroup>
This will bring in the core components of Semantic Kernel to power your chat completions.
Step 2: Setting Up Your Kernel in Program.cs
Next, we’ll configure the Semantic Kernel and OpenAI integration. Add the following code in your Program.cs to create the kernel and set up the chat completion service:
// Create your OpenAI client
string openAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey")
    ?? throw new InvalidOperationException("The OpenAiTestKey environment variable is not set.");
var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(openAiKey));

// Add Semantic Kernel and register the chat completion service
var kernelBuilder = Kernel.CreateBuilder();
kernelBuilder.AddOpenAIChatCompletion("gpt-4o", client);
var kernel = kernelBuilder.Build();
var chatService = kernel.GetRequiredService<IChatCompletionService>();
builder.Services.AddSingleton<IChatCompletionService>(chatService);
This step is crucial because it connects your app to OpenAI via the Semantic Kernel and sets up the chat completion service that will drive the AI responses in your chat.
Step 3: Creating the Chat Component
Now that we’ve got our services ready, it’s time to set up the chat component. We’ll define the chat interface in our Razor page. Here’s how you can do that:
Razor Section:
@page "/sk"
@using DevExpress.AIIntegration.Blazor.Chat
@using AIIntegration.Services.Chat;
@using Microsoft.SemanticKernel.ChatCompletion
@using System.Diagnostics
@using System.Text.Json
@using System.Text
@inject IChatCompletionService chatCompletionsService;
@inject IJSRuntime JSRuntime;
This UI will render a clean chat interface using DevExpress’s DxAIChat component, which is connected to our Semantic Kernel chat completion service.
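The directives above wire up the services; the DxAIChat markup itself sits in the same page. A minimal sketch of what that markup might look like (the attribute names are assumptions based on the MessageSent handler used in the code section of this post — check the DevExpress documentation for your component version):

```razor
<DxAIChat CssClass="my-chat" MessageSent="MessageSent" />
```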
Code Section:
Now, let’s handle the interaction logic. Here’s the code that powers the chat backend:
@code {
    ChatHistory chatHistory = new ChatHistory();

    async Task MessageSent(MessageSentEventArgs args)
    {
        // Add the user's message to the chat history
        chatHistory.AddUserMessage(args.Content);

        // Get a response from the chat completion service
        var result = await chatCompletionsService.GetChatMessageContentAsync(chatHistory);

        // Extract the response content
        string messageContent = result.Content ?? string.Empty;
        Debug.WriteLine("Message from chat completion service: " + messageContent);

        // Add the assistant's message to the history
        chatHistory.AddAssistantMessage(messageContent);

        // Send the response to the UI
        var message = new Message(MessageRole.Assistant, messageContent);
        args.SendMessage(message);
    }
}
With this in place, every time the user sends a message, the chat completion service will process the conversation history and generate a response from OpenAI. The result is then displayed in the chat window.
Step 4: Run Your Project
Before running the project, ensure that the correct environment variable for the OpenAI key is set (OpenAiTestKey). This key is necessary for the integration to communicate with OpenAI’s API.
Now, you’re ready to test! Simply run your project and navigate to https://localhost:58108/sk. Voilà! You’ll see a beautiful, AI-powered chat interface waiting for your input. 🎉
Conclusion
And that’s it! You’ve successfully integrated the DevExpress Chat component with the Semantic Kernel for AI-powered chat completions. Now, you can take your user interaction to the next level with intelligent, context-aware responses. The possibilities are endless with this integration—whether you’re building a customer support chatbot, a productivity assistant, or something entirely new.
Let me know how your integration goes, and feel free to share what cool things you build with this!
You can find the full implementation on GitHub.
by Joche Ojeda | Sep 4, 2024 | A.I, Semantic Kernel, XPO
In today’s AI-driven world, the ability to quickly and efficiently store, retrieve, and manage data is crucial for developing sophisticated applications. One tool that helps facilitate this is the Semantic Kernel, a lightweight, open-source development kit designed for integrating AI models into C#, Python, or Java applications. It enables rapid enterprise-grade solutions by serving as an effective middleware.
One of the key concepts in Semantic Kernel is memory—a collection of records, each containing a timestamp, metadata, embeddings, and a key. These memory records can be stored in various ways, depending on how you implement the interfaces. This flexibility allows you to define the storage mechanism, which means you can choose any database solution that suits your needs.
In this blog post, we’ll walk through how to use the IMemoryStore interface in Semantic Kernel and implement a custom memory store using DevExpress XPO, an ORM (Object-Relational Mapping) tool that can interact with over 14 database engines with a single codebase.
Why Use DevExpress XPO ORM?
DevExpress XPO is a powerful, free-to-use ORM created by DevExpress that abstracts the complexities of database interactions. It supports a wide range of database engines such as SQL Server, MySQL, SQLite, Oracle, and many others, allowing you to write database-independent code. This is particularly helpful when dealing with a distributed or multi-environment system where different databases might be used.
By using XPO, we can seamlessly create, update, and manage memory records in various databases, making our application more flexible and scalable.
Implementing a Custom Memory Store with DevExpress XPO
To integrate XPO with Semantic Kernel’s memory management, we’ll implement a custom memory store by defining a database entry class and a database interaction class. Then, we’ll complete the process by implementing the IMemoryStore interface.
Step 1: Define a Database Entry Class
Our first step is to create a class that represents the memory record. In this case, we’ll define an XpoDatabaseEntry class that maps to a database table where memory records are stored.
public class XpoDatabaseEntry : XPLiteObject {
    private string _oid;
    private string _collection;
    private string _timestamp;
    private string _embeddingString;
    private string _metadataString;
    private string _key;

    public XpoDatabaseEntry(Session session) : base(session) { }

    [Key(false)]
    public string Oid {
        get => _oid;
        set => SetPropertyValue(nameof(Oid), ref _oid, value);
    }
    public string Key {
        get => _key;
        set => SetPropertyValue(nameof(Key), ref _key, value);
    }
    public string MetadataString {
        get => _metadataString;
        set => SetPropertyValue(nameof(MetadataString), ref _metadataString, value);
    }
    public string EmbeddingString {
        get => _embeddingString;
        set => SetPropertyValue(nameof(EmbeddingString), ref _embeddingString, value);
    }
    public string Timestamp {
        get => _timestamp;
        set => SetPropertyValue(nameof(Timestamp), ref _timestamp, value);
    }
    public string Collection {
        get => _collection;
        set => SetPropertyValue(nameof(Collection), ref _collection, value);
    }

    protected override void OnSaving() {
        if (Session.IsNewObject(this)) {
            Oid = Guid.NewGuid().ToString();
        }
        base.OnSaving();
    }
}
This class extends XPLiteObject from the XPO library, which provides methods to manage the record lifecycle within the database.
Step 2: Create a Database Interaction Class
Next, we’ll define an XpoDatabase class to abstract the interaction with the data store. This class provides methods for creating tables, inserting, updating, and querying records.
internal sealed class XpoDatabase {
    public Task CreateTableAsync(IDataLayer conn) {
        using (Session session = new(conn)) {
            session.UpdateSchema(new[] { typeof(XpoDatabaseEntry).Assembly });
            session.CreateObjectTypeRecords(new[] { typeof(XpoDatabaseEntry).Assembly });
        }
        return Task.CompletedTask;
    }

    // Other database operations such as CreateCollectionAsync, InsertOrIgnoreAsync, etc.
}
This class acts as a bridge between Semantic Kernel and the database, allowing us to manage memory entries without having to write complex SQL queries.
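The elided operations all follow the same session-per-call pattern. As a hedged illustration (the method name and signature here are my own invention; the actual implementation lives in the GitHub fork linked below), an upsert-style helper might look like this:

```csharp
// Hypothetical sketch of an upsert by (collection, key).
// Requires: using DevExpress.Xpo; using DevExpress.Data.Filtering;
public Task UpsertAsync(IDataLayer conn, string collection, string key,
    string metadata, string embedding, string timestamp) {
    using (Session session = new(conn)) {
        // Look for an existing record in this collection with the same key
        var entry = session.FindObject<XpoDatabaseEntry>(
            CriteriaOperator.Parse("Collection = ? AND Key = ?", collection, key));
        if (entry == null) {
            // No match: create a new record (Oid is assigned in OnSaving)
            entry = new XpoDatabaseEntry(session) { Collection = collection, Key = key };
        }
        entry.MetadataString = metadata;
        entry.EmbeddingString = embedding;
        entry.Timestamp = timestamp;
        session.Save(entry);
    }
    return Task.CompletedTask;
}
```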
Step 3: Implement the IMemoryStore Interface
Finally, we implement the IMemoryStore interface, which is responsible for defining how the memory store behaves. This includes methods like UpsertAsync, GetAsync, and DeleteCollectionAsync.
public class XpoMemoryStore : IMemoryStore, IDisposable {
    public static async Task<XpoMemoryStore> ConnectAsync(string connectionString) {
        var memoryStore = new XpoMemoryStore(connectionString);
        await memoryStore._dbConnector.CreateTableAsync(memoryStore._dataLayer).ConfigureAwait(false);
        return memoryStore;
    }

    public async Task CreateCollectionAsync(string collectionName) {
        await this._dbConnector.CreateCollectionAsync(this._dataLayer, collectionName).ConfigureAwait(false);
    }

    // Other methods for interacting with memory records
}
The XpoMemoryStore class takes advantage of XPO’s ORM features, making it easy to create collections, store and retrieve memory records, and perform batch operations. Since Semantic Kernel doesn’t care where memory records are stored as long as the interfaces are correctly implemented, you can now store your memory records in any of the databases supported by XPO.
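Once the interface is implemented, wiring the store into Semantic Kernel’s memory layer is straightforward. A hedged usage sketch follows: the connection string, the embeddingGenerator variable, and the collection/record values are all assumptions for illustration, and SemanticTextMemory is the Semantic Kernel type that pairs a memory store with an embedding service.

```csharp
// Hypothetical usage sketch: connect the XPO-backed store and save/search a record.
// Assumes an embedding generation service (embeddingGenerator) is already configured.
var store = await XpoMemoryStore.ConnectAsync("XpoProvider=SQLite;Data Source=memories.db");
var memory = new SemanticTextMemory(store, embeddingGenerator);

// Store a fact; the text is embedded and persisted through XpoMemoryStore
await memory.SaveInformationAsync(
    collection: "articles",
    text: "XPO can target more than 14 database engines.",
    id: "xpo-fact-1");

// Semantic search over the same collection
await foreach (var result in memory.SearchAsync("articles", "Which databases does XPO support?"))
{
    Console.WriteLine($"{result.Metadata.Id}: {result.Relevance}");
}
```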
Advantages of Using XPO with Semantic Kernel
- Database Independence: You can switch between multiple databases without changing your codebase.
- Scalability: XPO’s ability to manage complex relationships and large datasets makes it ideal for enterprise-grade solutions.
- ORM Abstraction: With XPO, you avoid writing SQL queries and focus on high-level operations like creating and updating objects.
Conclusion
In this blog post, we’ve demonstrated how to integrate DevExpress XPO ORM with the Semantic Kernel using the IMemoryStore interface. This approach allows you to store AI-driven memory records in a wide variety of databases while maintaining a flexible, scalable architecture.
In future posts, we’ll explore specific use cases and how you can leverage this memory store in real-world applications. For the complete implementation, you can check out my GitHub fork.
Stay tuned for more insights and examples!
by Joche Ojeda | Jun 6, 2024 | Data Synchronization
Choosing the Right JSON Serializer for SyncFramework: DataContractJsonSerializer vs Newtonsoft.Json vs System.Text.Json
When building robust and efficient synchronization solutions with SyncFramework, selecting the appropriate JSON serializer is crucial. Serialization directly impacts performance, data integrity, and compatibility, making it essential to understand the differences between the available options: DataContractJsonSerializer, Newtonsoft.Json, and System.Text.Json. This post will guide you through the decision-making process, highlighting key considerations and the implications of using each serializer within the SyncFramework context.
Understanding SyncFramework and Serialization
SyncFramework is designed to synchronize data across various data stores, devices, and applications. Efficient serialization ensures that data is accurately and quickly transmitted between these components. The choice of serializer affects not only performance but also the complexity of the implementation and maintenance of your synchronization logic.
DataContractJsonSerializer
DataContractJsonSerializer is tightly coupled with the DataContract and DataMember attributes, making it a reliable choice for scenarios that require explicit control over serialization:
- Strict Type Adherence: By enforcing strict adherence to data contracts, DataContractJsonSerializer ensures that serialized data conforms to predefined types. This is particularly important in SyncFramework when dealing with complex and strongly-typed data models.
- Data Integrity: The explicit nature of DataContract and DataMember attributes guarantees that only the intended data members are serialized, reducing the risk of data inconsistencies during synchronization.
- Compatibility with WCF: If SyncFramework is used in conjunction with WCF services, DataContractJsonSerializer provides seamless integration, ensuring consistent serialization across services.
When to Use: Opt for DataContractJsonSerializer when working with strongly-typed data models and when strict type fidelity is essential for your synchronization logic.
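To make that strictness concrete, here is a small, hedged example using the BCL’s DataContractJsonSerializer. The SyncRecord type is invented for illustration and is not part of SyncFramework:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

// Only members marked [DataMember] are serialized, which is how
// DataContractJsonSerializer enforces strict data contracts.
[DataContract]
public class SyncRecord
{
    [DataMember]
    public string Id { get; set; }

    [DataMember]
    public int Version { get; set; }

    // Not a [DataMember], so it is silently excluded from the payload.
    public string LocalCachePath { get; set; }
}

public static class Demo
{
    public static string Serialize(SyncRecord record)
    {
        var serializer = new DataContractJsonSerializer(typeof(SyncRecord));
        using var stream = new MemoryStream();
        serializer.WriteObject(stream, record);
        return Encoding.UTF8.GetString(stream.ToArray());
    }
}
```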
Newtonsoft.Json (Json.NET)
Newtonsoft.Json is known for its flexibility and ease of use, making it a popular choice for many .NET applications:
- Ease of Integration: Newtonsoft.Json requires no special attributes on classes, allowing for quick integration with existing codebases. This flexibility can significantly speed up development within SyncFramework.
- Advanced Customization: It offers extensive customization options through attributes like JsonProperty and JsonIgnore, and supports complex scenarios with custom converters. This makes it easier to handle diverse data structures and serialization requirements.
- Wide Adoption: As a widely-used library, Newtonsoft.Json benefits from a large community and comprehensive documentation, providing valuable resources during implementation.
When to Use: Choose Newtonsoft.Json for its flexibility and ease of use, especially when working with existing codebases or when advanced customization of the serialization process is required.
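For comparison, here is a hedged sketch of an illustrative SyncRecord type under Newtonsoft.Json. It assumes the Newtonsoft.Json NuGet package is referenced, and the type is again invented for the example:

```csharp
using System;
using Newtonsoft.Json;

// No attributes are required at all, but JsonProperty and JsonIgnore
// give fine-grained control when you need it.
public class SyncRecord
{
    [JsonProperty("id")]          // rename the property in the JSON payload
    public string Id { get; set; }

    public int Version { get; set; }

    [JsonIgnore]                  // excluded from serialization entirely
    public string LocalCachePath { get; set; }
}

public static class Demo
{
    public static string Serialize(SyncRecord record) =>
        JsonConvert.SerializeObject(record);
}
```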
System.Text.Json
System.Text.Json is a high-performance JSON serializer introduced in .NET Core 3.0 and .NET 5, providing a modern and efficient alternative:
- High Performance: System.Text.Json is optimized for performance, making it suitable for high-throughput synchronization scenarios in SyncFramework. Its minimal overhead and efficient memory usage can significantly improve synchronization speed.
- Integration with ASP.NET Core: As the default JSON serializer for ASP.NET Core, System.Text.Json offers seamless integration with modern .NET applications, ensuring consistency and reducing setup time.
- Attribute-Based Customization: While offering fewer customization options compared to Newtonsoft.Json, it still supports essential attributes like JsonPropertyName and JsonIgnore, providing a balance between performance and configurability.
When to Use: System.Text.Json is ideal for new applications targeting .NET Core or .NET 5+, where performance is a critical concern and advanced customization is not a primary requirement.
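A hedged sketch of attribute-based customization with System.Text.Json, using an invented SyncRecord type for illustration:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// System.Text.Json's built-in attributes cover the common cases:
// renaming via JsonPropertyName and exclusion via JsonIgnore.
public class SyncRecord
{
    [JsonPropertyName("id")]
    public string Id { get; set; }

    public int Version { get; set; }

    [JsonIgnore]
    public string LocalCachePath { get; set; }
}

public static class Demo
{
    public static string Serialize(SyncRecord record) =>
        JsonSerializer.Serialize(record);

    public static SyncRecord Deserialize(string json) =>
        JsonSerializer.Deserialize<SyncRecord>(json);
}
```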
Handling DataContract Requirements
In SyncFramework, certain types may require specific serialization behaviors dictated by DataContract notations. When using Newtonsoft.Json or System.Text.Json, additional steps are necessary to accommodate these requirements:
- Newtonsoft.Json: Use attributes such as JsonProperty to map JSON properties to DataContract members. Custom converters may be needed for complex serialization scenarios.
- System.Text.Json: Employ JsonPropertyName and custom converters to align with DataContract requirements. While less flexible than Newtonsoft.Json, it can still handle most common scenarios with appropriate configuration.
Conclusion
Choosing the right JSON serializer for SyncFramework depends on the specific needs of your synchronization logic. DataContractJsonSerializer is suited for scenarios demanding strict type fidelity and integration with WCF services. Newtonsoft.Json offers unparalleled flexibility and ease of integration, making it ideal for diverse and complex data structures. System.Text.Json provides a high-performance, modern alternative for new .NET applications, balancing performance with essential customization.
By understanding the strengths and limitations of each serializer, you can make an informed decision that ensures efficient, reliable, and maintainable synchronization in your SyncFramework implementation.
by Joche Ojeda | Jun 4, 2024 | Data Synchronization
In the world of software development, exception handling is a critical aspect that can significantly impact the user experience and the robustness of the application. When it comes to client-server architectures, such as the SyncFramework, the way exceptions are handled can make a big difference. This blog post will explore two common patterns for handling exceptions in a C# client-server API and provide recommendations on how clients should handle exceptions.
Throwing Exceptions in the API
The first pattern involves throwing exceptions directly in the API. When an error occurs in the API, an exception is thrown. This approach provides detailed information about what went wrong, which can be incredibly useful for debugging. However, it also means that the client needs to be prepared to catch and handle these exceptions.
public void SomeApiMethod()
{
    // Some code...
    if (someErrorCondition)
    {
        throw new SomeException("Something went wrong");
    }
    // More code...
}
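Because exceptions propagate to the caller in this pattern, the client must wrap its calls in try/catch. A self-contained, hedged sketch (SomeException and the condition flag are stand-ins mirroring the method above):

```csharp
using System;

// Stand-in for whatever exception type the API exposes.
public class SomeException : Exception
{
    public SomeException(string message) : base(message) { }
}

public static class ApiClientDemo
{
    public static string CallApi(bool someErrorCondition)
    {
        try
        {
            // Simulates the API call that may throw
            if (someErrorCondition)
            {
                throw new SomeException("Something went wrong");
            }
            return "ok";
        }
        catch (SomeException ex)
        {
            // The client decides how to recover: retry, log, or surface a message.
            return "handled: " + ex.Message;
        }
    }
}
```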
Returning HTTP Error Codes
The second pattern involves returning HTTP status codes to indicate the result of the operation. For example, a `200` status code means the operation was successful, a `400` series status code means there was a client error, and a `500` series status code means there was a server error. This approach provides a standard way for the client to check the result of the operation without having to catch exceptions. However, it may not provide as much detailed information about what went wrong.
[HttpGet]
public IActionResult Get()
{
    try
    {
        // Code that could throw an exception
        var data = GetData(); // placeholder for your actual operation
        return Ok(data);
    }
    catch (SomeException ex)
    {
        // Return the message only; ex.ToString() could leak internal details
        return StatusCode(500, $"Internal server error: {ex.Message}");
    }
}
Best Practices
In general, a good practice is to handle exceptions on the server side and return appropriate HTTP status codes and error messages in the response. This way, the client only needs to interpret the HTTP status code and the error message, if any, and doesn’t need to know how to handle specific exceptions that are thrown by the server. This makes the client code simpler and less coupled to the server.
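On the client side, this means branching on the status code rather than on server exception types. A hedged sketch of such an interpreter (the class and method names are illustrative):

```csharp
using System;
using System.Net;

// The client only needs to classify the status code and read the body;
// it never needs to know which exception type the server threw.
public static class SyncClientDemo
{
    public static string Interpret(HttpStatusCode status, string body)
    {
        int code = (int)status;
        if (code >= 200 && code < 300) return "success";
        if (code >= 400 && code < 500) return "client error: " + body;
        if (code >= 500) return "server error: " + body;
        return "unexpected status " + code;
    }
}
```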
Remember, it’s important to avoid exposing sensitive information in error messages. The error messages should be helpful for the client to understand what went wrong, but they shouldn’t reveal any sensitive information or details about the internal workings of the server.
Conclusion
Exception handling is a crucial aspect of any application, and it’s especially important in a client-server architecture like the SyncFramework. By handling exceptions on the server side and returning meaningful HTTP status codes and error messages, you can create a robust and user-friendly application. Happy coding!
by Joche Ojeda | Apr 28, 2024 | A.I
Introduction to Semantic Kernel
Hey there, fellow curious minds! Let’s talk about something exciting today—Semantic Kernel. But don’t worry, we’ll keep it as approachable as your favorite coffee shop chat.
What Exactly Is Semantic Kernel?
Imagine you’re in a magical workshop, surrounded by tools. Well, Semantic Kernel is like that workshop, but for developers. It’s an open-source Software Development Kit (SDK) that lets you create AI agents. These agents aren’t secret spies; they’re little programs that can answer questions, perform tasks, and generally make your digital life easier.
Here’s the lowdown:
- Open-Source: Think of it as a community project. People from all walks of tech life contribute to it, making it better and more powerful.
- Software Development Kit (SDK): Fancy term, right? But all it means is that it’s a set of tools for building software. Imagine it as your AI Lego set.
- Agents: Nope, not James Bond. These are like your personal AI sidekicks. They’re here to assist you, not save the world (although that would be cool).
A Quick History Lesson
About a year ago, Semantic Kernel stepped onto the stage. Since then, it’s been striding confidently, like a seasoned performer. Here are some backstage highlights:
- GitHub Stardom: On March 17th, 2023, it made its grand entrance on GitHub. And guess what? It got more than 17,000 stars! (Around 18,200 at the time of writing.) That’s like being the coolest kid in the coding playground.
- Downloads Galore: The C# kernel (don’t worry, we’ll explain what that is) had more than 1,000,000 NuGet downloads. It’s like everyone wanted a piece of the action.
- VS Code Extension: Over 25,000 downloads! Imagine it as a magical wand for your code editor.
And hey, the .NET kernel even threw a party—it reached its 1.0 release! The Python and Java kernels are close behind with their 1.0 Release Candidates. It’s like they’re all graduating from AI university.
Why Should You Care?
Now, here’s the fun part. Why should you, someone with a lifetime of wisdom and curiosity, care about this?
- Microsoft Magic: Semantic Kernel loves hanging out with Microsoft products. It’s like they’re best buddies. So, when you use it, you get to tap into the power of Microsoft’s tech universe. Fancy, right? Learn more
- No Code Rewrite Drama: Imagine you have a favorite recipe (let’s say it’s your grandma’s chocolate chip cookies). Now, imagine you want to share it with everyone. Semantic Kernel lets you do that without rewriting the whole recipe. You just add a sprinkle of AI magic! Check it out
- LangChain vs. Semantic Kernel: These two are like rival chefs. Both want to cook up AI goodness. But while LangChain (built around Python and JavaScript) comes with a full spice rack of tools, Semantic Kernel is more like a secret ingredient. It’s lightweight and includes not just Python but also C#. Plus, it’s like the Assistant API—no need to fuss over memory and context windows. Just cook and serve!
So, my fabulous friend, whether you’re a seasoned developer or just dipping your toes into the AI pool, Semantic Kernel has your back. It’s like having a friendly AI mentor who whispers, “You got this!” And with its growing community and constant updates, Semantic Kernel is leading the way in AI development.
Remember, you don’t need a PhD in computer science to explore this—it’s all about curiosity, creativity, and a dash of Semantic Kernel magic. 🌟✨
Ready to dive in? Check out the Semantic Kernel GitHub repository for the latest updates!