AI-Powered XtraReports in XAF: Unlocking DevExpress Enhancements

Today is Friday, so I decided to take it easy with my integration research. When I woke up, I just wanted to read the source code of the DevExpress AI integrations to get inspired. I began by reading the official blog post about AI and reporting (DevExpress Blog Post). Then, as usual, I forked the repository to make my own modifications.

After completing the typical cloning procedure in Visual Studio, I realized that to use the AI functionality of XtraReports, you don’t need any special version of the report viewer.

The only requirement is to have the NuGet reference as shown below:


    <ItemGroup>
        <PackageReference Include="DevExpress.AIIntegration.Blazor.Reporting.Viewer" Version="24.2.1-alpha-24260" />
    </ItemGroup>
    

Then, add the report integration as shown below:


    config.AddBlazorReportingAIIntegration(options =>
    {
        options.SummarizeBehavior = SummarizeBehavior.Abstractive;
        options.AvailableLanguages = new List<LanguageItem>
        {
            new LanguageItem { Key = "de", Text = "German" },
            new LanguageItem { Key = "es", Text = "Spanish" },
            new LanguageItem { Key = "en", Text = "English" },
            new LanguageItem { Key = "ru", Text = "Russian" },
            new LanguageItem { Key = "it", Text = "Italian" }
        };
    });
    

After completing these steps, your report viewer will display a little star in the options menu, where you can invoke the AI operations.
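
Note that the reporting AI features rely on an AI client being registered at startup. As a rough sketch, the registration in Program.cs could look like the following, reusing the OpenAI client, gpt-4o model, and OpenAiTestKey environment variable from the later examples in this series (the exact wiring in your project may differ):

    using DevExpress.AIIntegration;
    // ...
    string openAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");
    builder.Services.AddDevExpressAI(config =>
    {
        // Register the chat client that powers the report viewer's AI operations.
        var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(openAiKey));
        config.RegisterChatClientOpenAIService(client, "gpt-4o");

        // Reporting-specific AI configuration (summarize/translate), as shown above.
        config.AddBlazorReportingAIIntegration(options =>
        {
            options.SummarizeBehavior = SummarizeBehavior.Abstractive;
            options.AvailableLanguages = new List<LanguageItem>
            {
                new LanguageItem { Key = "en", Text = "English" },
                new LanguageItem { Key = "es", Text = "Spanish" }
            };
        });
    });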

You can find the source code for this example in my GitHub repository: https://github.com/egarim/XafSmartEditors

Till next time, XAF out!!!

The New Era of Smart Editors: Creating a RAG system using XAF and the new Blazor chat component

The New Era of Smart Editors: Developer Express and AI Integration

The new era of smart editors is already here. Developer Express has introduced AI functionality in many of their controls for .NET (Windows Forms, Blazor, WPF, MAUI).

This advancement will eventually come to XAF, but in the meantime, here at XARI, we are experimenting with XAF integrations to add value to our customers.

In this article, we are going to integrate the new chat component into an XAF application, and our first use case will be RAG (Retrieval-Augmented Generation). RAG is a system that combines external data sources with AI-generated responses, improving accuracy and relevance in answers by retrieving information from a document set or knowledge base and using it in conjunction with AI predictions.

To achieve this integration, we will follow the steps outlined in this tutorial:

Implement a Property Editor Based on Custom Components (Blazor)

Implementing the Property Editor

When I implement my own property editor, I usually avoid doing so for primitive types because, in most cases, my property editor will need more information than a simple primitive value. For this implementation, I want to handle a custom value in my property editor. I typically create an interface to represent the type, ensuring compatibility with both XPO and EF Core.

namespace XafSmartEditors.Razor.RagChat
{
    public interface IRagData
    {
        Stream FileContent { get; set; }
        string Prompt { get; set; }
        string FileName { get; set; }
    }
}

Non-Persistent Implementation

After defining the type for my editor, I need to create a non-persistent implementation:

namespace XafSmartEditors.Razor.RagChat
{
    [DomainComponent]
    public class IRagDataImp : IRagData, IXafEntityObject, INotifyPropertyChanged
    {
        private void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

        public IRagDataImp()
        {
            Oid = Guid.NewGuid();
        }

        [DevExpress.ExpressApp.Data.Key]
        [Browsable(false)]  
        public Guid Oid { get; set; }

        private string prompt;
        private string fileName;
        private Stream fileContent;

        public Stream FileContent
        {
            get => fileContent;
            set
            {
                if (fileContent == value) return;
                fileContent = value;
                OnPropertyChanged();
            }
        }

        public string FileName
        {
            get => fileName;
            set
            {
                if (fileName == value) return;
                fileName = value;
                OnPropertyChanged();
            }
        }
        
        public string Prompt
        {
            get => prompt;
            set
            {
                if (prompt == value) return;
                prompt = value;
                OnPropertyChanged();
            }
        }

        // IXafEntityObject members
        void IXafEntityObject.OnCreated() { }
        void IXafEntityObject.OnLoaded() { }
        void IXafEntityObject.OnSaving() { }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}

Creating the Blazor Chat Component

Now, it’s time to create our Blazor component and add the new DevExpress chat component for Blazor:

<DxAIChat CssClass="my-chat" Initialized="Initialized" 
          RenderMode="AnswerRenderMode.Markdown" 
          UseStreaming="true"
          SizeMode="SizeMode.Medium">
    <EmptyMessageAreaTemplate>
        <div class="my-chat-ui-description">
            <span style="font-weight: bold; color: #008000;">Rag Chat</span> Assistant is ready to answer your questions.
        </div>
    </EmptyMessageAreaTemplate>
    <MessageContentTemplate>
        <div class="my-chat-content">
            @ToHtml(context.Content)
        </div>
    </MessageContentTemplate>
</DxAIChat>

@code {
    IRagData _value;
    [Parameter]
    public IRagData Value
    {
        get => _value;
        set => _value = value;
    }
    
    async Task Initialized(IAIChat chat)
    {
        await chat.UseAssistantAsync(new OpenAIAssistantOptions(
            this.Value.FileName,
            this.Value.FileContent,
            this.Value.Prompt
        ));
    }

    MarkupString ToHtml(string text)
    {
        return (MarkupString)Markdown.ToHtml(text);
    }
}

The main takeaway from this component is that it receives a parameter named Value of type IRagData, and we use this value to initialize the IAIChat service in the Initialized method.

Creating the Component Model

With the interface and domain component in place, we can now create the component model to communicate the value of our domain object with the Blazor component:

namespace XafSmartEditors.Razor.RagChat
{
    public class RagDataComponentModel : ComponentModelBase
    {
        public IRagData Value
        {
            get => GetPropertyValue<IRagData>();
            set => SetPropertyValue(value);
        }

        public EventCallback<IRagData> ValueChanged
        {
            get => GetPropertyValue<EventCallback<IRagData>>();
            set => SetPropertyValue(value);
        }

        public override Type ComponentType => typeof(RagChat);
    }
}

Creating the Property Editor

Finally, let’s create the property editor class that serves as a bridge between XAF and the new component:

namespace XafSmartEditors.Blazor.Server.Editors
{
    [PropertyEditor(typeof(IRagData), true)]
    public class IRagDataPropertyEditor : BlazorPropertyEditorBase, IComplexViewItem
    {
        private IObjectSpace _objectSpace;
        private XafApplication _application;

        public IRagDataPropertyEditor(Type objectType, IModelMemberViewItem model) : base(objectType, model) { }

        public void Setup(IObjectSpace objectSpace, XafApplication application)
        {
            _objectSpace = objectSpace;
            _application = application;
        }

        public override RagDataComponentModel ComponentModel => (RagDataComponentModel)base.ComponentModel;

        protected override IComponentModel CreateComponentModel()
        {
            var model = new RagDataComponentModel();

            model.ValueChanged = EventCallback.Factory.Create<IRagData>(this, value =>
            {
                model.Value = value;
                OnControlValueChanged();
                WriteValue();
            });

            return model;
        }

        protected override void ReadValueCore()
        {
            base.ReadValueCore();
            ComponentModel.Value = (IRagData)PropertyValue;
        }

        protected override object GetControlValueCore() => ComponentModel.Value;

        protected override void ApplyReadOnly()
        {
            base.ApplyReadOnly();
            ComponentModel?.SetAttribute("readonly", !AllowEdit);
        }
    }
}

Bringing It All Together

Now, let’s create a domain object that can feed the content of a file to our chat component:

namespace XafSmartEditors.Module.BusinessObjects
{
    [DefaultClassOptions]
    public class PdfFile : BaseObject
    {
        public PdfFile(Session session) : base(session) { }

        string prompt;
        string name;
        FileData file;

        public FileData File
        {
            get => file;
            set => SetPropertyValue(nameof(File), ref file, value);
        }

        public string Name
        {
            get => name;
            set => SetPropertyValue(nameof(Name), ref name, value);
        }

        public string Prompt
        {
            get => prompt;
            set => SetPropertyValue(nameof(Prompt), ref prompt, value);
        }
    }
}

Creating the Controller

We are almost done! Now, we need to create a controller with a popup action:

namespace XafSmartEditors.Module.Controllers
{
    public class OpenChatController : ViewController
    {
        PopupWindowShowAction Chat;

        public OpenChatController()
        {
            this.TargetObjectType = typeof(PdfFile);
            Chat = new PopupWindowShowAction(this, "ChatAction", "View");
            Chat.Caption = "Chat";
            Chat.ImageName = "artificial_intelligence";
            Chat.Execute += Chat_Execute;
            Chat.CustomizePopupWindowParams += Chat_CustomizePopupWindowParams;
        }

        private void Chat_Execute(object sender, PopupWindowShowActionExecuteEventArgs e) { }

        private void Chat_CustomizePopupWindowParams(object sender, CustomizePopupWindowParamsEventArgs e)
        {
            PdfFile pdfFile = this.View.CurrentObject as PdfFile;
            var os = this.Application.CreateObjectSpace(typeof(ChatView));
            var chatView = os.CreateObject<ChatView>();

            MemoryStream memoryStream = new MemoryStream();
            pdfFile.File.SaveToStream(memoryStream);
            memoryStream.Seek(0, SeekOrigin.Begin);

            chatView.RagData = os.CreateObject<IRagDataImp>();
            chatView.RagData.FileName = pdfFile.File.FileName;
            chatView.RagData.Prompt = !string.IsNullOrEmpty(pdfFile.Prompt) ? pdfFile.Prompt : DefaultPrompt;
            chatView.RagData.FileContent = memoryStream;

            DetailView detailView = this.Application.CreateDetailView(os, chatView);
            detailView.Caption = $"Chat with Document | {pdfFile.File.FileName.Trim()}";

            e.View = detailView;
        }
    }
}
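
One piece not shown in this excerpt is the ChatView object that the controller creates. A minimal, hypothetical sketch of it, following the same non-persistent pattern as IRagDataImp above, could look like this:

namespace XafSmartEditors.Module.BusinessObjects
{
    // Hypothetical sketch of the ChatView object used by OpenChatController.
    [DomainComponent]
    public class ChatView : IXafEntityObject, INotifyPropertyChanged
    {
        public ChatView()
        {
            Oid = Guid.NewGuid();
        }

        [DevExpress.ExpressApp.Data.Key]
        [Browsable(false)]
        public Guid Oid { get; set; }

        private IRagData ragData;

        // Rendered by IRagDataPropertyEditor in the popup detail view.
        public IRagData RagData
        {
            get => ragData;
            set
            {
                if (ragData == value) return;
                ragData = value;
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(RagData)));
            }
        }

        // IXafEntityObject members
        void IXafEntityObject.OnCreated() { }
        void IXafEntityObject.OnLoaded() { }
        void IXafEntityObject.OnSaving() { }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}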

Conclusion

That’s everything we need to create a RAG system using XAF and the new DevExpress Chat component. You can find the complete source code here: GitHub Repository.

If you want to meet and discuss AI, XAF, and .NET, feel free to schedule a meeting: Schedule a Meeting.

Until next time, XAF out!

Integrating DevExpress Chat Component with Semantic Kernel: A Step-by-Step Guide

Are you excited to bring powerful AI chat completions to your web application? I sure am! In this post, we’ll walk through how to integrate the DevExpress Chat component with the Semantic Kernel using OpenAI. This combination can make your app more interactive and intelligent, and it’s surprisingly simple to set up. Let’s dive in!

Step 1: Adding NuGet Packages

First, let’s ensure we have all the necessary packages. Open your DevExpress.AI.Samples.Blazor.csproj file and add the following NuGet references:

    <ItemGroup>
        <PackageReference Include="Microsoft.KernelMemory.Abstractions" Version="0.78.241007.1" />
        <PackageReference Include="Microsoft.KernelMemory.Core" Version="0.78.241007.1" />
        <PackageReference Include="Microsoft.SemanticKernel" Version="1.21.1" />
    </ItemGroup>

This will bring in the core components of Semantic Kernel to power your chat completions.

Step 2: Setting Up Your Kernel in Program.cs

Next, we’ll configure the Semantic Kernel and OpenAI integration. Add the following code in your Program.cs to create the kernel and set up the chat completion service:


    //Create your OpenAI client
    string OpenAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");
    var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(OpenAiKey));

    //Adding semantic kernel
    var KernelBuilder = Kernel.CreateBuilder();
    KernelBuilder.AddOpenAIChatCompletion("gpt-4o", client);
    var sk = KernelBuilder.Build();
    var ChatService = sk.GetRequiredService<IChatCompletionService>();
    builder.Services.AddSingleton<IChatCompletionService>(ChatService);
    

This step is crucial because it connects your app to OpenAI via the Semantic Kernel and sets up the chat completion service that will drive the AI responses in your chat.

Step 3: Creating the Chat Component

Now that we’ve got our services ready, it’s time to set up the chat component. We’ll define the chat interface in our Razor page. Here’s how you can do that:

Razor Section:


    @page "/sk"
    @using DevExpress.AIIntegration.Blazor.Chat
    @using AIIntegration.Services.Chat;
    @using Microsoft.SemanticKernel.ChatCompletion
    @using System.Diagnostics
    @using System.Text.Json
    @using System.Text

    

    @inject IChatCompletionService chatCompletionsService;
    @inject IJSRuntime JSRuntime;
    

This UI will render a clean chat interface using DevExpress’s DxAIChat component, which is connected to our Semantic Kernel chat completion service.
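
Note that the DxAIChat markup itself is not included in the snippet above. A minimal sketch of it, wired to the MessageSent handler defined in the code section below, could look like this (the exact attributes in the original sample may differ):

    <DxAIChat MessageSent="MessageSent" />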

Code Section:

Now, let’s handle the interaction logic. Here’s the code that powers the chat backend:


    @code {

        ChatHistory ChatHistory = new ChatHistory();

        async Task MessageSent(MessageSentEventArgs args)
        {
            // Add the user's message to the chat history
            ChatHistory.AddUserMessage(args.Content);

            // Get a response from the chat completion service
            var Result = await chatCompletionsService.GetChatMessageContentAsync(ChatHistory);

            // Extract the response content
            string MessageContent = Result.Content;
            Debug.WriteLine("Message from chat completion service:" + MessageContent);

            // Add the assistant's message to the history
            ChatHistory.AddAssistantMessage(MessageContent);

            // Send the response to the UI
            var message = new Message(MessageRole.Assistant, MessageContent);
            args.SendMessage(message);
        }
    }
    

With this in place, every time the user sends a message, the chat completion service will process the conversation history and generate a response from OpenAI. The result is then displayed in the chat window.

Step 4: Run Your Project

Before running the project, ensure that the correct environment variable for the OpenAI key is set (OpenAiTestKey). This key is necessary for the integration to communicate with OpenAI’s API.

Now, you’re ready to test! Simply run your project and navigate to https://localhost:58108/sk. Voilà! You’ll see a beautiful, AI-powered chat interface waiting for your input. 🎉

Conclusion

And that’s it! You’ve successfully integrated the DevExpress Chat component with the Semantic Kernel for AI-powered chat completions. Now, you can take your user interaction to the next level with intelligent, context-aware responses. The possibilities are endless with this integration—whether you’re building a customer support chatbot, a productivity assistant, or something entirely new.

Let me know how your integration goes, and feel free to share what cool things you build with this!

You can find the full implementation on GitHub.

Test Driving DevExpress Chat Component

If you’re a Blazor developer looking to integrate AI-powered chat functionality into your applications, the new DevExpress DxAIChat component offers a turnkey solution. It’s designed to make building chat interfaces as easy as possible, with out-of-the-box support for simple chats, virtual assistants, and even Retrieval-Augmented Generation (RAG) scenarios.

The best part? You don’t have to start from scratch—DevExpress provides a range of pre-built examples, making it easy to get started and customize to your needs. Whether you’re aiming for a basic chatbot or a more complex AI assistant, this component has you covered.

To use the examples, you can plug in any OpenAI-compatible service, such as Ollama, OpenAI, or Azure OpenAI. The current DevExpress example uses Azure, like this:

using DevExpress.AIIntegration;
...
string azureOpenAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string azureOpenAIKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
string deploymentName = "YourModelDeploymentName";
...
builder.Services.AddDevExpressBlazor();
builder.Services.AddDevExpressAI((config) => {
    config.RegisterChatClientOpenAIService(
        new AzureOpenAIClient(
            new Uri(azureOpenAIEndpoint),
            new AzureKeyCredential(azureOpenAIKey)
        ), deploymentName);
    //or call the following method to use self-hosted Ollama models
    //config.RegisterChatClientOllamaAIService("http://localhost:11434/api/chat", "llama3.1");
});

I tested with the OpenAI API instead of Azure, so my code looks like this:

string azureOpenAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string azureOpenAIKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");

string OpenAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");

builder.Services.AddDevExpressBlazor();
builder.Services.AddDevExpressAI((config) => {
    //var client = new AzureOpenAIClient(
    //    new Uri(azureOpenAIEndpoint),
    //    new AzureKeyCredential(azureOpenAIKey));
    //OpenAI model IDs differ from Azure deployment names: Azure = gpt4o, OpenAI = gpt-4o
    var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(OpenAiKey));
    config.RegisterChatClientOpenAIService(client, "gpt-4o");
    config.RegisterOpenAIAssistants(client, "gpt-4o");
});

Notice that the model IDs in Azure and OpenAI are different:

  • Azure deployment name: gpt4o
  • OpenAI model ID: gpt-4o

These are the URLs for the different examples:

  • Chat : https://localhost:53340/
  • Assistant/RAG: https://localhost:53340/assistant
  • Streaming: https://localhost:53340/streaming

I’m really happy that the DevExpress components do all the heavy lifting and boilerplate code for us. I have built the same scenarios using Semantic Kernel, and even though there isn’t much code to write, you still face the challenge of developing a responsive UI.

For more information and to see the examples in action, check out the full article.

Memory Types in Semantic Kernel

In the world of AI and large language models (LLMs), understanding how to manage memory is crucial for creating applications that feel responsive and intelligent. Many developers are turning to Semantic Kernel, a lightweight and open-source development kit, to integrate these capabilities into their applications. For those already familiar with Semantic Kernel, let’s dive into how memory functions within this framework, especially when interacting with LLMs via chat completions.

Chat Completions: The Most Common Interaction with LLMs

When it comes to interacting with LLMs, one of the most intuitive and widely used methods is through chat completions. This allows developers to simulate a conversation between a user and an AI agent, facilitating various use cases like building chatbots, automating business processes, or even generating code.

In Semantic Kernel, chat completions are implemented through models from popular providers like OpenAI, Google, and others. These models enable developers to manage the flow of conversation seamlessly. While using chat completions, one key aspect to keep in mind is how the conversation history is stored and managed.

Temporary Memory: ChatHistory and Kernel String Arguments

Within the Semantic Kernel framework, the memory that a chat completion model uses is managed by the ChatHistory object. This object stores the conversation history temporarily, meaning it captures the back-and-forth between the user and the model during an active session. Alternatively, you can use a string argument passed to the kernel, which contains context information for the conversation. However, like the ChatHistory, this method is also not persistent.

Once the host class is disposed of, all stored context and memory from both the ChatHistory object and the string argument are lost. This transient nature of memory means that these methods are useful only for short-term interactions and are destroyed after the session ends.
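
To make this concrete, here is a minimal sketch of both approaches, assuming the OpenAI connector and the OpenAiTestKey environment variable used in the earlier examples:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var kernelBuilder = Kernel.CreateBuilder();
kernelBuilder.AddOpenAIChatCompletion("gpt-4o", Environment.GetEnvironmentVariable("OpenAiTestKey"));
var kernel = kernelBuilder.Build();

// 1) ChatHistory: the conversation lives only in this in-memory object.
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("My favorite color is green.");
history.AddAssistantMessage("Noted, your favorite color is green.");
history.AddUserMessage("What is my favorite color?");
var reply = await chatService.GetChatMessageContentAsync(history);

// 2) Kernel string argument: context is passed per invocation and not retained.
var answer = await kernel.InvokePromptAsync(
    "Use this context: {{$context}} Question: {{$question}}",
    new KernelArguments
    {
        ["context"] = "The user's favorite color is green.",
        ["question"] = "What is my favorite color?"
    });

// Once these objects go out of scope or the host is disposed,
// both the history and the arguments are gone.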

What’s Next? Exploring Long-Term Memory Options

In this article, we’ve discussed how Semantic Kernel manages short-term memory with ChatHistory and kernel string arguments. However, for more complex applications that require retaining memory over longer periods—think customer support agents or business process automation—temporary memory might not be sufficient. In the next article, we’ll explore the options available for implementing long-term memory within Semantic Kernel, providing insights on how to make your AI applications even more powerful and context-aware.

Stay tuned for the deep dive into long-term memory solutions!