by Joche Ojeda | Feb 4, 2026 | A.I
How GitHub Copilot Became My Sysadmin, Writer, and Creative Partner
When people talk about GitHub Copilot, they almost always describe it the same way: an AI that writes code.
That’s true—Copilot can write code—but treating it as “just a coding tool” is like calling a smartphone
“a device for making phone calls.”
The moment you start using Copilot inside Visual Studio Code, something important changes:
it stops being a code generator and starts behaving more like a context-aware work partner.
Not because it magically knows everything—but because VS Code gives it access to the things that matter:
your files, your folders, your terminals, your scripts, your logs, and even your remote machines.
That’s why this article isn’t about code autocomplete. It’s about the other side of Copilot:
the part that’s useful for people who are building, maintaining, writing, organizing, diagnosing, or shipping
real work—especially the messy kind.
Copilot as a Linux Server Sidekick
One of my most common uses for Copilot has nothing to do with application logic.
I use it for Linux server setup and diagnostics.
If you run Copilot in VS Code and you also use remote development over SSH (the Remote - SSH extension), you essentially get a workspace that can:
- Connect to Linux servers over SSH
- Edit remote configuration files safely
- Run commands and scripts in an integrated terminal
- Search through logs and system files quickly
- Manage folders like they’re local projects
That means Copilot isn’t “helping me code.” It’s helping me operate.
I often set up hosting and administration tools like Virtualmin or Webmin, or configure other infrastructure:
load balancers, web servers, SSL, firewall rules, backups—whatever the server needs to become stable and usable.
In those situations Copilot becomes the assistant that speeds up the most annoying parts:
the remembering, the searching, the cross-checking, and the “what does this error actually mean?”
What this looks like in practice
Instead of bouncing between browser tabs and old notes, I’ll use Copilot directly in the workspace:
- “Explain what this service error means and suggest the next checks.”
- “Read this log snippet and list the most likely causes.”
- “Generate a safe Nginx config for this domain layout.”
- “Create a hardening checklist for a fresh VPS.”
- “What would you verify before assuming this is a network issue?”
The benefit isn’t that Copilot is always right. The benefit is that it helps you move faster with less friction—
and it keeps your work inside the same place where the files and commands actually live.
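To make the log prompt concrete: the first-pass triage Copilot tends to suggest is often just a small pipeline that ranks the noisiest error lines before asking why they happen. A sketch, with a throwaway sample log standing in for a real one:

```shell
# Build a tiny sample log; in real use, point at the server's actual log file.
log=$(mktemp)
cat > "$log" <<'EOF'
2026-02-01 10:00:01 service started
2026-02-01 10:00:05 ERROR: upstream timed out
2026-02-01 10:00:09 ERROR: upstream timed out
2026-02-01 10:00:12 permission denied: /var/www
EOF

# Keep suspicious lines, strip leading timestamps, and rank by frequency.
grep -iE 'error|fail|denied' "$log" \
  | sed -E 's/^[0-9: -]+//' \
  | sort | uniq -c | sort -rn | head -n 10 > triage.txt

cat triage.txt
```

The most frequent failure floats to the top, which is usually the right place to start the conversation.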
Copilot as an Operations Brain (Not Just a Code Brain)
Here’s the real mental shift:
Copilot doesn’t need to write code to be useful. It needs context.
In VS Code, that context includes the entire workspace: configuration files, scripts, documentation, logs,
command history, and whatever you’re currently working on. Once you realize that, Copilot becomes useful for:
- Debugging infrastructure problems
- Translating “error messages” into “actionable steps”
- Drafting repeatable setup scripts
- Creating operational runbooks and checklists
- Turning tribal knowledge into documentation
It’s especially valuable when the work is messy and practical—when you’re not trying to invent something new,
you’re trying to make something work.
Copilot as a Writing Workspace
Now switch gears. One of the best non-coding Copilot stories I've seen comes from my cousin Alexandra, who is writing a small storybook.
She started the way a lot of people do: writing by hand, collecting pages, keeping ideas in scattered places.
At one point she was using Copilot through Microsoft Office, but I suggested a different approach:
Use VS Code as the creative workspace.
Not because VS Code is “a writing tool,” but because it gives you structure for free:
- A folder becomes the book
- Each chapter becomes a file
- Markdown becomes a simple, readable format
- Git (optionally) becomes version history
- Copilot becomes the editor, brainstormer, and consistency checker
In that setup, Copilot isn’t writing the story for you. It’s helping you shape it:
rewrite a paragraph, suggest alternatives, tighten dialogue, keep a consistent voice,
summarize a scene, or generate a few options when you’re stuck.
Yes, Even Illustrations (Within Reason)
This surprises people: a VS Code workspace can also support simple illustrations.
Not full-on painting, obviously—but enough for many small projects.
VS Code can handle things like vector graphics (SVG), simple diagram formats, and text-driven visuals.
If you describe a scene, Copilot can help generate a starting SVG illustration, and you can iterate from there.
It’s not about replacing professional design—it’s about making it easier to prototype, experiment,
and keep everything (text + assets) together in one organized place.
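As a concrete (and entirely hypothetical) example, asking for "a sun over green hills" might produce a starting point like this, which you can then tweak shape by shape:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 120">
  <!-- sky -->
  <rect width="200" height="120" fill="#cdeaff"/>
  <!-- sun -->
  <circle cx="160" cy="30" r="16" fill="#ffd54f"/>
  <!-- rolling hills -->
  <ellipse cx="55" cy="125" rx="95" ry="45" fill="#7cb342"/>
  <ellipse cx="165" cy="130" rx="105" ry="50" fill="#8bc34a"/>
</svg>
```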
The Hidden Superpower: VS Code’s Ecosystem
Copilot is powerful on its own. But its real strength comes from where it lives.
VS Code brings the infrastructure:
- Extensions for almost any workflow
- Remote development over SSH
- Integrated terminals and tasks
- Search across files and folders
- Versioning and history
- Cross-platform consistency
So whether you’re configuring a server, drafting a runbook, organizing a book, or building a folder-based project,
Copilot adapts because the workspace defines the context.
The Reframe
If there’s one idea worth keeping, it’s this:
GitHub Copilot is not a coding tool. It’s a general-purpose work companion that happens to be excellent at code.
Once you stop limiting it to source files, it becomes:
- A sysadmin assistant
- A documentation partner
- A creative editor
- A workflow accelerator
- A “second brain” inside the tools you already use
And the best part is that none of this requires a new platform or a new habit.
It’s the same VS Code workspace you already know—just used for more than code.
by Joche Ojeda | Jan 30, 2026 | C#, dotnet
There is a familiar moment in every developer’s life.
Memory usage keeps creeping up.
The process never really goes down.
After hours—or days—the application feels heavier, slower, tired.
And the conclusion arrives almost automatically:
“The framework has a memory leak.”
“That component library is broken.”
“The GC isn’t doing its job.”
It’s a comforting explanation.
It’s also usually wrong.
Memory Leaks vs. Memory Retention
In managed runtimes like .NET, true memory leaks are rare.
The garbage collector is extremely good at reclaiming memory.
If an object is unreachable, it will be collected.
What most developers call a “memory leak” is actually
memory retention.
- Objects are still referenced
- So they stay alive
- Forever
From the GC’s point of view, nothing is wrong.
From your point of view, RAM usage keeps climbing.
Why Frameworks Are the First to Be Blamed
When you open a profiler and look at what’s alive, you often see:
- UI controls
- ORM sessions
- Binding infrastructure
- Framework services
So it’s natural to conclude:
“This thing is leaking.”
But profilers don’t answer why something is alive.
They only show that it is alive.
Framework objects are usually not the cause — they are just sitting at the
end of a reference chain that starts in your code.
The Classic Culprit: Bad Event Wiring
The most common “mirage leak” is caused by events.
The pattern
- A long-lived publisher (static service, global event hub, application-wide manager)
- A short-lived subscriber (view, view model, controller)
- A subscription that is never removed
That’s it. That’s the leak.
Why it happens
Events are references.
If the publisher lives for the lifetime of the process, anything it
references also lives for the lifetime of the process.
Your object doesn’t get garbage collected.
It becomes immortal.
The Immortal Object: When Short-Lived Becomes Eternal
An immortal object is an object that should be short-lived
but can never be garbage collected because it is still reachable from a GC
root.
Not because of a GC bug.
Not because of a framework leak.
But because our code made it immortal.
Static fields, singletons, global event hubs, timers, and background services
act as anchors. Once a short-lived object is attached to one of these, it
stops aging.
GC Root
└── static / singleton / service
    └── Event, timer, or callback
        └── Delegate or closure
            └── Immortal object
                └── Large object graph
From the GC’s perspective, everything is valid and reachable.
From your perspective, memory never comes back down.
A Retention Dependency Tree That Cannot Be Collected
GC Root
└── static GlobalEventHub.Instance
    └── GlobalEventHub.DataUpdated (event)
        └── delegate → CustomerViewModel.OnDataUpdated
            └── CustomerViewModel
                └── ObjectSpace / DbContext
                    └── IdentityMap / ChangeTracker
                        └── Customer, Order, Invoice, ...
What you see in the memory dump:
- thousands of entities
- ORM internals
- framework objects
What actually caused it:
- one forgotten event unsubscription
The Lambda Trap (Even Worse, Because It Looks Innocent)
The code
public CustomerViewModel(GlobalEventHub hub)
{
    // The lambda below implicitly captures `this`. Because the hub
    // is long-lived, this subscription pins the view model in memory.
    hub.DataUpdated += (_, e) =>
    {
        RefreshCustomer(e.CustomerId);
    };
}
This lambda captures this implicitly.
The compiler creates a hidden closure that keeps the instance alive.
“But I Disposed the Object!”
Disposal does not save you here.
- Dispose does not remove event handlers
- Dispose does not break static references
- Dispose does not stop background work automatically
IDisposable is a promise — not a magic spell.
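The fix is mechanical but must be explicit: keep a reference to the handler and remove it yourself. A sketch, reusing the hypothetical GlobalEventHub from the example above:

```csharp
public sealed class CustomerViewModel : IDisposable
{
    private readonly GlobalEventHub _hub;
    private readonly EventHandler<DataUpdatedEventArgs> _handler;

    public CustomerViewModel(GlobalEventHub hub)
    {
        _hub = hub;
        // Store the delegate so the exact same instance can be removed later.
        _handler = (_, e) => RefreshCustomer(e.CustomerId);
        _hub.DataUpdated += _handler;
    }

    public void Dispose()
    {
        // This line is what actually breaks the retention chain.
        _hub.DataUpdated -= _handler;
    }

    private void RefreshCustomer(int customerId) { /* ... */ }
}
```

Unsubscribing with a freshly written lambda would silently do nothing; -= only removes a delegate that compares equal to the one that was added.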
Leak-Hunting Checklist
Reference Roots
- Are there static fields holding objects?
- Are singletons referencing short-lived instances?
- Is a background service keeping references alive?
Events
- Are subscriptions always paired with unsubscriptions?
- Are lambdas hiding captured references?
Timers & Async
- Are timers stopped and disposed?
- Are async loops cancellable?
Profiling
- Follow GC roots, not object counts
- Inspect retention paths
- Ask: who is holding the reference?
Final Thought
Frameworks rarely leak memory.
We do.
Follow the references.
Trust the GC.
Question your wiring.
That’s when the mirage finally disappears.
by Joche Ojeda | Jan 21, 2026 | Uncategorized
This is a story about testing XAF applications — and why now is finally the right time to do it properly.
With Copilot agents and AI-assisted coding, writing code has become cheaper and faster than ever. Features that used to take days now take hours. Boilerplate is almost free.
And that changes something important.
For the first time, many of us actually have time to do the things we always postponed:
- documenting the source code,
- writing proper user manuals,
- and — yes — writing tests.
But that immediately raises the real question:
What kind of tests should I even write?
Most developers use “unit tests” as a synonym for “tests”. But once you move beyond trivial libraries and into real application frameworks, that definition becomes fuzzy very quickly.
And nowhere is that more obvious than in XAF.
I’ve been working with XAF for something like 15–18 years (I’ve honestly lost count). It’s my preferred application framework, and it’s incredibly productive — but testing it “as-is” can feel like wrestling a framework-shaped octopus.
So let’s clarify something first.
You don’t test the framework. You test your logic.
XAF already gives you a lot for free:
- CRUD
- UI generation
- validation plumbing
- security system
- object lifecycle
- persistence
DevExpress has already tested those parts — thousands of times, probably millions by now.
So you do not need to write tests like:
- “Can ObjectSpace save an object?”
- “Does XAF load a View?”
- “Does the security system work?”
You assume those things work.
Your responsibility is different.
You test the decisions your application makes.
That principle applies to XAF — and honestly, to any serious application framework.
The mental shift: what is a “unit”, really?
In classic theory, a unit is the smallest piece of code with a single responsibility — usually a method.
In real applications, that definition is often too small to be useful.
Sometimes the real “unit” is:
- a workflow,
- a business decision,
- a state transition,
- or a rule spanning multiple objects.
In XAF especially, the decision matters more than the method.
That’s why the right question is not “how do I unit test XAF?”
The right question is:
Which decisions in my app are important enough to protect?
The test pyramid for XAF
A practical, realistic test pyramid for XAF looks like this:
- Fast unit tests for pure logic
- Unit tests with thin seams around XAF-specific dependencies
- Integration tests with a real ObjectSpace (confidence tests)
- Minimal UI tests only for critical wiring
Let’s go layer by layer.
1) Push logic out of XAF into plain services (fast unit tests)
This is the biggest win you’ll ever get.
The moment you move important logic out of:
- Controllers
- Rules
- ObjectSpace-heavy code
…testing becomes boring — and boring is good.
Put non-UI logic into:
- Domain services (e.g. IInvoicePricingService)
- Use-case handlers (CreateInvoiceHandler, PostInvoiceHandler)
- Pure methods (no ObjectSpace, no View, no security calls)
Now you can test with plain xUnit / NUnit and simple mocks or fakes.
What is a service?
A service is code that makes business decisions.
It answers questions like:
- “Can this invoice be posted?”
- “Is this discount valid?”
- “What is the total?”
- “Is the user allowed to approve this?”
A service:
- contains real logic
- is framework-agnostic
- is the thing you most want to unit test
If code decides why something happens, it belongs in a service.
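A minimal sketch of what that looks like, with illustrative names (InvoiceSnapshot is a plain DTO, not a persistent XAF object):

```csharp
// Plain data in, decision out: no ObjectSpace, no View, no SecuritySystem.
public sealed record InvoiceSnapshot(int LineCount, decimal Total, bool IsPosted);

public interface IInvoicePostingService
{
    bool CanPost(InvoiceSnapshot invoice);
}

public sealed class InvoicePostingService : IInvoicePostingService
{
    public bool CanPost(InvoiceSnapshot invoice) =>
        !invoice.IsPosted &&
        invoice.LineCount > 0 &&
        invoice.Total >= 0m;
}
```

Testing this requires nothing but xUnit or NUnit and a constructor call.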
2) Unit test XAF-specific logic with thin seams
Some logic will always touch XAF concepts. That’s fine.
The trick is not to eliminate XAF — it’s to isolate it.
You do that by introducing seams.
What is a seam?
A seam is a boundary where you can replace a real dependency with a fake one in a test.
A seam:
- usually contains no business logic
- exists mainly for testability
- is often an interface or wrapper
Common XAF seams:
- ICurrentUser instead of SecuritySystem.CurrentUser
- IClock instead of DateTime.Now
- Repositories / unit-of-work instead of raw IObjectSpace
- IUserNotifier instead of direct UI calls
Seams don’t decide anything — they just let you escape the framework in tests.
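In code, seams are deliberately boring. A sketch of the IClock and ICurrentUser seams listed above:

```csharp
public interface IClock { DateTime Now { get; } }
public interface ICurrentUser { string UserName { get; } }

// Production implementations just forward to the real dependency.
public sealed class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// Test doubles pin the value so tests become deterministic.
public sealed class FixedClock : IClock
{
    public DateTime Now { get; }
    public FixedClock(DateTime now) => Now = now;
}
```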
What does “adapter” mean in XAF?
An adapter is a very thin class whose job is to:
- translate XAF concepts (View, ObjectSpace, Actions, Rules)
- into calls to your services and use cases
Adapters:
- contain little or no business logic
- are allowed to be hard to unit test
- exist to connect XAF to your code
Typical XAF adapters:
- Controllers
- Appearance Rules
- Validation Rules
- Action handlers
- Property setters that delegate to services
The adapter is not the brain.
The brain lives in services.
What should you test here?
- Appearance Rules
Test the decision behind the rule (e.g. “Is this field editable now?”).
Then confirm via integration tests that the rule is wired correctly.
- Validation Rules
Test the validation logic itself (conditions, edge cases).
Optionally verify that the XAF rule triggers when expected.
- Calculated properties / non-trivial setters
- Controller decision logic, once extracted from the Controller
3) Integration tests with a real ObjectSpace (confidence tests)
Unit tests prove your logic is correct.
Integration tests prove your XAF wiring still behaves.
They answer questions like:
- Does persistence work?
- Do validation and appearance rules trigger?
- Do lifecycle hooks behave?
- Does security configuration work as expected?
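A sketch of such a confidence test, assuming a test fixture that exposes an IObjectSpaceProvider backed by an in-memory data store (the fixture itself is not shown):

```csharp
[Fact]
public void NewCustomer_PersistsAndCanBeReloaded()
{
    using var objectSpace = _provider.CreateObjectSpace();
    var customer = objectSpace.CreateObject<Customer>();
    customer.Name = "Acme";
    objectSpace.CommitChanges();

    // A second ObjectSpace proves the data really round-tripped.
    using var fresh = _provider.CreateObjectSpace();
    var reloaded = fresh.FindObject<Customer>(
        CriteriaOperator.Parse("Name = ?", "Acme"));

    Assert.NotNull(reloaded);
}
```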
4) Minimal UI tests (only for critical wiring)
UI automation is expensive and fragile.
Keep UI tests only for:
- Critical actions
- Essential navigation flows
- Known production regressions
The key mental model
A rule is not the unit.
The decision behind the rule is the unit.
Test the decision directly.
Use integration tests to confirm the glue still works.
Closing thought
Test your app’s decisions, not the framework’s behavior.
That’s the difference between a test suite that helps you move faster
and one that quietly turns into a tax.
by Joche Ojeda | Jan 15, 2026 | DLLs, netframework
Most .NET developers eventually face it.
A project that targets .NET Framework 4.7.2, uses video and audio components, depends on vendor SDKs, and mixes managed code, native DLLs, and legacy decisions.
In other words: a brownfield project.
This is the kind of system that still runs real businesses, even if it doesn’t fit neatly into modern slides about containers and self-contained deployments.
And it’s also where many developers discover — usually the hard way — that deployment is not just copying the Release folder and hoping for the best.
The Myth: “Just Copy the EXE”
I’ve seen this mindset for years:
“It works on my machine. Just copy the EXE and the DLLs to the client.”
Sometimes it works. Often it doesn’t.
And when it fails, it fails in the most frustrating ways:
- Silent crashes
- Missing assembly errors
- COM exceptions that appear only on client machines
- Video or audio features that break minutes after startup
The real issue isn’t the DLL.
The real issue is that most developers don’t understand how .NET Framework actually resolves assemblies.
How I Learned This the Hard Way (XPO + Pervasive, 2006)
The first time I truly understood this was around 2006, while writing a custom XPO provider for Pervasive 7.
At the time, the setup was fairly typical:
- A .NET Framework application
- Using DevExpress XPO
- Talking to Pervasive SQL
- The Pervasive .NET provider lived under Program Files
- It was not registered in the GAC
On my development machine, everything worked.
On another machine? File not found. Or worse: a crash when XPO tried to initialize the provider.
The “fix” everyone used back then was almost ritual:
“Copy the Pervasive provider DLL into the same folder as the EXE.”
And suddenly… it worked.
That was my first real encounter with assembly probing — even though I didn’t know the name yet.
How Assembly Resolution Really Works in .NET Framework
.NET Framework does not scan your disk.
It does not care that a DLL exists under Program Files.
It follows a very strict resolution order.
1. Already Loaded Assemblies
If the assembly is already loaded in the current AppDomain, the CLR reuses it.
Simple.
2. Application Base Directory
Next, the CLR looks in the directory where the EXE lives.
This single rule explains years of “just copy the DLL next to the EXE” folklore.
In the Pervasive case, copying the provider locally worked because it entered the application base probing path.
3. Private Probing Paths
This is where things get interesting.
In app.config, you can extend the probing logic:
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <probing privatePath="lib;providers;drivers" />
  </assemblyBinding>
</runtime>
This tells the runtime:
“If you don’t find the assembly in the EXE folder, also look in these subfolders.”
Important details many developers miss:
- Paths are relative to the EXE
- No recursive search
- Every folder must be explicitly listed
4. Global Assembly Cache (GAC)
Only after probing the application paths does the CLR look in the GAC, and only if:
- The assembly is strong-named
- The reference includes that strong name
Two common misconceptions:
- A DLL being “installed on the system” does not matter
- Non–strong-named assemblies are never loaded from the GAC
5. AssemblyResolve: The Last-Chance Hook
If the CLR cannot find an assembly using any of the rules above, it fires:
AppDomain.CurrentDomain.AssemblyResolve
This happens at runtime, exactly when the assembly is needed.
That’s why:
- The app may start fine
- The crash happens later
- Video or database features fail “randomly”
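A minimal sketch of such a hook, loading from a "providers" subfolder (the folder name is illustrative, not a convention):

```csharp
using System;
using System.IO;
using System.Reflection;

static class AssemblyFallback
{
    // Register at startup, before any type from the optional assembly is used.
    public static void Register()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var fileName = new AssemblyName(args.Name).Name + ".dll";
            var candidate = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, "providers", fileName);

            // Returning null lets the normal "file not found" error surface.
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };
    }
}
```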
Why Video and Audio Projects Amplify the Pain
Projects that deal with video codecs, audio pipelines, hardware acceleration, and vendor SDKs are especially vulnerable because:
- Assemblies load late
- Managed code pulls native DLLs
- Bitness matters (x86 vs x64)
- Licensing logic often lives outside managed code
The failure doesn’t happen at startup. It happens when the feature is first used.
The Final Step: Building a Real Installer
Eventually, I stopped pretending that copying files was enough.
I built a proper installer.
Even though today I often use the Visual Studio Installer Projects extension, for this legacy application I went with a WiX-based installer. Not because it was fashionable — but because it forced me to be explicit.
An installer asks uncomfortable questions:
- Which assemblies belong in the GAC?
- Which must live next to the EXE?
- Which native DLLs must be deployed together?
- Which dependencies only worked because Visual Studio copied them silently?
I had to inspect every file I was adding and make a conscious decision:
- Shared, strong-named → GAC
- App-local or version-sensitive → EXE folder
- Native dependencies → exact placement matters
The installer didn’t magically fix the application.
It revealed the truth about it.
The Real Lesson of Brownfield Work
Legacy projects don’t fail because they’re old.
They fail because nobody understands them anymore.
Once you understand assembly probing, GAC rules, runtime loading, and deployment boundaries, brownfield systems stop being mysterious.
They become predictable.
What’s Next: COM (Yes, That COM)
This application doesn’t stop at managed assemblies.
It also depends heavily on COM components.
The next article will focus entirely on that world: what COM components really are, why they survived for decades, and how to work with them safely as a .NET developer.
If assembly probing was the first reality check, COM is the one that separates “it runs on my machine” from “this can be deployed.”
by Joche Ojeda | Jan 12, 2026 | A.I, Copilot
I recently listened to an episode of the Merge Conflict podcast by James Montemagno and Frank Krueger where a topic came up that, surprisingly, I had never explicitly framed before: greenfield vs brownfield projects.
That surprised me—not because the ideas were new, but because I’ve spent years deep in software architecture and AI, and yet I had never put a name to something I deal with almost daily.
Once I did a bit of research (and yes, asked ChatGPT too), everything clicked.
Greenfield and Brownfield, in Simple Terms
- Greenfield projects are built from scratch. No legacy code, no historical baggage, no technical debt.
- Brownfield projects already exist. They carry history: multiple teams, different styles, shortcuts, and decisions made under pressure.
If that sounds abstract, here’s the practical version:
Greenfield is what we want.
Brownfield is what we usually get.
Greenfield Projects: Architecture Paradise
In a greenfield project, everything feels right.
You can choose your architecture and actually stick to it. If you’re building a .NET MAUI application, you can start with proper MVVM, SOLID principles, clean boundaries, and consistent conventions from day one.
As developers, we know how things should be done. Greenfield projects give us permission to do exactly that.
They’re also extremely friendly to AI tools.
When the rules are clear and consistent, Copilot and AI agents perform beautifully. You can define specs, outline patterns, and let the tooling do a lot of the repetitive work for you.
That’s why I often use AI for greenfield projects as internal tools or side projects—things I’ve always known how to build, but never had the time to prioritize. Today, time is no longer the constraint. Tokens are.
Brownfield Projects: Welcome to Reality
Then there’s the real world.
At the office, we work with applications that have been touched by many hands over many years—sometimes 10 different teams, sometimes freelancers, sometimes “someone’s cousin who fixed it once.”
Each left behind a different style, different patterns, and different assumptions.
Customers often describe their systems like this:
“One team built it, another modified it, then my cousin fixed a bug, then my cousin got married and stopped helping, and then someone else took over.”
And yet—the system works.
That’s an important reminder.
The main job of software is not to be beautiful. It’s to do the job.
A lot of brownfield systems are ugly, fragile, and terrifying to touch—but they deliver real business value every single day.
Why AI Is Even More Powerful in Brownfield Projects
Here’s my honest opinion, based on experience:
AI is even more valuable in brownfield projects than in greenfield ones.
I’ve modernized six or seven legacy applications so far—codebases that everyone was afraid to touch. AI made that possible.
Legacy systems are mentally expensive. Reading spaghetti code drains energy. Understanding implicit behavior takes time. Humans get tired.
AI doesn’t.
It will patiently analyze a 2,000-line class without complaining.
Take Windows Forms applications as an example. It’s old technology, easy to forget, and full of quirks. Copilot can generate code that I know how to write—but much faster than I could after years away from WinForms.
Even more importantly, AI makes it far easier to introduce tests into systems that never had them:
- Add tests class by class
- Mock dependencies safely
- Lock in existing behavior before refactoring
Historically, this was painful enough that many teams preferred a full rewrite.
But rewrites have a hidden cost: every rewritten line introduces new bugs.
AI allows us to modernize in place—incrementally and safely.
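The "lock in existing behavior" step usually takes the form of characterization tests: you assert what the legacy code does today, not what it should do. A sketch with hypothetical names:

```csharp
[Fact]
public void LegacyDiscount_GoldCustomer_KeepsCurrentBehavior()
{
    // Characterization test: the expected value is captured by running
    // the existing code once, so any refactor that changes it fails loudly.
    var calculator = new LegacyDiscountCalculator();

    var discount = calculator.GetDiscount(customerType: "GOLD", orderTotal: 1200m);

    Assert.Equal(0.15m, discount);
}
```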
Clean Code and Business Value
This is the real win.
With AI, we no longer have to choose between:
- “The code works, but don’t touch it”
- “The code is beautiful, but nothing works yet”
We can improve structure, readability, and testability without breaking what already delivers value.
Greenfield projects are still fun. They’re great for experimentation and clean design.
But brownfield projects? That’s where AI feels like a superpower.
Final Thoughts
Today, I happily use AI in both worlds:
- Greenfield projects for fast experimentation and internal tooling
- Brownfield projects for rescuing legacy systems, adding tests, and reducing technical debt
AI doesn’t replace experience—it amplifies it.
Especially when dealing with systems held together by history, habits, and just enough hope to keep running.
And honestly?
Those are the projects where the impact feels the most real.