
Local RAG with .NET, PostgreSQL, and Ollama: Code Setup (Part 3)

In Part 1 and Part 2, we set up our infrastructure: a PostgreSQL database with pgvector for storing embeddings, and Ollama running the Phi4 model locally. Now we’ll create a .NET application to bring these components together into a working RAG system. We’ll start by setting up our solution structure with proper test projects.

If you’re just joining, you’ll need to follow the steps in Parts 1 and 2 first to get PostgreSQL and Ollama running in Docker.

Let’s start by creating the .NET solution and projects from the command line (we could do this through Visual Studio’s UI, but I prefer keeping my command-line-fu strong).

Creating a new .NET project

Open PowerShell as an administrator. Browse to a directory where you’d like to create your solution – I keep mine in my Git repositories folder. Then run these commands:

First, create the solution

dotnet new sln -n LocalRagConsoleDemo

You might see a prompt about the dev certificate – if so, run:

dotnet dev-certs https --trust

Now create our three projects: the main console app and two test projects

dotnet new console -n LocalRagConsoleDemo
dotnet new nunit -n LocalRagConsoleUnitTests
dotnet new nunit -n LocalRagConsoleIntegrationTests

Finally, add everything to the solution, organizing tests in their own folder

dotnet sln LocalRagConsoleDemo.sln add LocalRagConsoleDemo/LocalRagConsoleDemo.csproj

dotnet sln LocalRagConsoleDemo.sln add LocalRagConsoleUnitTests/LocalRagConsoleUnitTests.csproj --solution-folder "Unit Tests"

dotnet sln LocalRagConsoleDemo.sln add LocalRagConsoleIntegrationTests/LocalRagConsoleIntegrationTests.csproj --solution-folder "Unit Tests"

Now we have a solution set up with proper separation of concerns – a main console project for our RAG application and separate projects for unit and integration tests. In the next section, we’ll add the required NuGet packages and start building our connection to PostgreSQL.

You can verify everything is set up correctly by opening LocalRagConsoleDemo.sln in Visual Studio or your preferred IDE. You should see a main console project and two test projects organized in a “Unit Tests” solution folder.

Adding Dependencies

Navigate to the LocalRagConsoleDemo project folder and add our initial NuGet packages:

dotnet add package Microsoft.Extensions.Http
dotnet add package Newtonsoft.Json

At this point, you should have a working solution structure:

  • A main console project
  • Two test projects in a “Unit Tests” solution folder
  • Basic HTTP and JSON handling capabilities
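
To give a feel for where these two packages come in, here is a rough, hedged sketch of calling the Ollama instance from Part 2 with an HttpClient and Newtonsoft.Json. The class name, endpoint, and payload below are assumptions for illustration only – the real wiring comes in the full walkthrough:

// A hypothetical smoke test, not the final code for this series.
// Assumes the Ollama container from Part 2 is listening on http://localhost:11434
// and that the phi4 model has been pulled (verify the endpoint shape against the Ollama docs).
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class OllamaSmokeTest
{
    public static async Task<string> AskAsync(string prompt)
    {
        // In the real app this HttpClient would come from IHttpClientFactory (Microsoft.Extensions.Http).
        using var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

        // Newtonsoft.Json builds the request body...
        var body = new JObject
        {
            ["model"] = "phi4",
            ["prompt"] = prompt,
            ["stream"] = false
        };

        var content = new StringContent(body.ToString(), Encoding.UTF8, "application/json");
        var response = await httpClient.PostAsync("/api/generate", content);
        response.EnsureSuccessStatusCode();

        // ...and parses the response JSON to pull out the generated text.
        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        return (string)json["response"];
    }
}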


What’s Next?
I’ll be back soon with a detailed walkthrough of building the RAG application. In the meantime, you can check out the working code at:

https://github.com/jimsowers/LocalRAGConsoleDemo

Happy Coding!
-Jim

Local RAG with .NET, Postgres, and Ollama: Postgres with pgvector Setup (Part 1)

Step 1: Set Up Postgres Locally with the pgvector Extension

In this series, we’ll build a RAG (Retrieval-Augmented Generation) application that runs completely on your local machine. RAG systems use AI to answer questions based on your specific documents or data. The first component we need is a vector database to store and search through our document embeddings¹. We’ll use Postgres with the pgvector extension, which adds support for vector similarity search.

To make things easy to start, I am going to use a Docker container to run PostgreSQL with the pgvector extension instead of installing and configuring Postgres locally. This makes sure we all have the same setup and avoids configuration issues across different operating systems.


In a PowerShell prompt, I pull the image:

docker pull pgvector/pgvector:pg16

This pulls the image from Docker Hub: https://hub.docker.com/r/pgvector/pgvector

At the time of this post, that gave me the ‘pgvector/pgvector’ image with the tag pg16. You can see it by running this command in PowerShell:

docker images

Once you have the image, you can run it – starting the container with this command:

docker run -d --name postgres_with_pgvector -e POSTGRES_PASSWORD=password99 -e POSTGRES_USER=postgres -e POSTGRES_DB=vectordb -p 5432:5432 pgvector/pgvector:pg16

You can also do this start-up visually using Docker Desktop if you prefer:

Click on the Images link in the left menu of Docker Desktop, then click the run triangle next to the image you just downloaded. Enter the parameters from the command line above – the container name, the port mapping, and the -e values as environment variables – in the optional settings.

Click on the Containers link on the left and you will see that your container is running.

Next, you can connect to postgres in the container with the command:

 docker exec -it postgres_with_pgvector psql -U postgres

That should give you a postgres prompt ‘postgres=#’

Let’s test that everything is connected and working now with these commands (hitting enter after each line):

CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE items (
    id bigserial PRIMARY KEY,
    embedding vector(3)
);
INSERT INTO items (embedding) VALUES ('[1,2,3]');
SELECT * FROM items;

The select should return:

 id | embedding
----+-----------
  1 | [1,2,3]
(1 row)

To get out of the psql prompt, type ‘\q’
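
If you would rather run the same sanity check from .NET, here is a minimal sketch. It assumes the Npgsql NuGet package and reuses the values from the docker run command above (localhost:5432, user postgres, password password99, database vectordb); it is illustration only, not part of this post’s setup:

// Minimal sketch: repeat the psql round trip from C# using the (assumed) Npgsql package.
using System;
using Npgsql;

class PgVectorSmokeTest
{
    static void Main()
    {
        var connectionString = "Host=localhost;Port=5432;Username=postgres;Password=password99;Database=vectordb";

        using var connection = new NpgsqlConnection(connectionString);
        connection.Open();

        // Insert a vector literal into the items table created above...
        using (var insert = new NpgsqlCommand("INSERT INTO items (embedding) VALUES ('[4,5,6]')", connection))
        {
            insert.ExecuteNonQuery();
        }

        // ...then read the most recent row back, casting the vector to text for display.
        using (var select = new NpgsqlCommand("SELECT embedding::text FROM items ORDER BY id DESC LIMIT 1", connection))
        {
            Console.WriteLine(select.ExecuteScalar()); // prints [4,5,6]
        }
    }
}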

Let’s review what we’ve accomplished:

  • Postgres is running in a Docker container
  • The pgvector extension is installed and working
  • We’ve verified we can store and retrieve vector data

In Part 2, we’ll set up Ollama to run an AI model locally. This will allow us to generate the vector embeddings that we’ll store in our Postgres database. Then in Part 3, we’ll create a .NET application that brings these components together into a complete RAG system.

If you need to stop the container, you can use:

docker stop postgres_with_pgvector

Or use Docker Desktop to stop it. Your data will persist for next time.

See you in Part 2!
-Jim

  1. Embeddings are mathematical representations of objects like text, images, and audio. They are used by machine learning (ML) and artificial intelligence (AI) systems to understand complex relationships in data.

Extension method to trim all string fields in a C# object

using System.Linq;
using System.Reflection;

// Extension methods must live in a static class; the class name here is arbitrary.
public static class StringTrimExtensions
{
    /// <summary>
    /// This does NOT go into sub objects, only the top level object,
    /// i.e. if you have a class with a string field, it will trim that string field's value
    /// but not trim any string fields on sub-objects inside the containing class.
    /// </summary>
    /// <param name="currentObject">The object whose string fields will be trimmed in place.</param>
    public static void SafeTrimAllStringFields(this object currentObject)
    {
        if (currentObject == null)
        {
            return;
        }

        // Find every string field declared on the object's type, public or not.
        var type = currentObject.GetType();
        var stringFields = type.GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            .Where(f => f.FieldType == typeof(string));

        foreach (var field in stringFields)
        {
            var value = (string)field.GetValue(currentObject);
            if (value != null)
            {
                field.SetValue(currentObject, value.SafeTrim());
            }
        }
    }

    public static string SafeTrim(this string value)
    {
        return value == null ? string.Empty : value.Trim();
    }
}
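
Usage looks like this – the Person class here is just a hypothetical example:

// Hypothetical class with public string fields, only to demonstrate the extension method.
public class Person
{
    public string FirstName;
    public string LastName;
}

// Somewhere in your code:
var person = new Person { FirstName = "  Jim  ", LastName = " Sowers " };
person.SafeTrimAllStringFields();
// person.FirstName is now "Jim" and person.LastName is now "Sowers"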

Reading hidden field values with Selenium

I needed to hide a GUID used as an index on a .cshtml page so I could get model binding on controls dynamically added with AJAX (it’s a long story, and that was a mouthful).

I had a hard time finding the value in the hidden field; as it turns out, you can just get it from the value attribute on the Selenium element like this:


IWebElement hiddenIndex = driver.FindElement(By.Id("MyControlName_0__Index"));
var indexValueToUse = hiddenIndex.GetAttribute("value");

 

Implementing reCAPTCHA in a Razor MVC view

Setup in your Google account

You will have to have a Google account to use reCAPTCHA.

Log into the Google account you want to tie your reCAPTCHA keys to and navigate to:

https://www.google.com/recaptcha/admin#list

Under the ‘Register a new site’ section of that page, follow the instructions and set up a separate key set for each of your development, test, and production environments – including one specifically for ‘localhost’ or 127.0.0.1 so you can test locally.

Web config changes

Add the public and private keys you just created on the Google site to your web.config:

<add key="ReCaptchaPrivateKey" value="yourPrivateKey"/> 
<add key="ReCaptchaPublicKey" value="yourPublicKey"/>

HttpGet action in the controller

Add a property for the public key to your viewmodel, load it from the web.config in the GET controller action, and pass it into the view.
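
Something along these lines works – the viewmodel and action names below are placeholders for illustration, not a required implementation:

// Placeholder viewmodel: YourPublicKey matches the property the views below bind to.
public class ContactViewModel
{
    public string YourPublicKey { get; set; }
    // ...the rest of your form fields
}

[HttpGet]
public ActionResult Contact()
{
    var model = new ContactViewModel
    {
        // Read the public key added to web.config above.
        YourPublicKey = ConfigurationManager.AppSettings["ReCaptchaPublicKey"]
    };

    return View(model);
}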

Changes in the head section

Add this script in the head section of your layout or view page:

<script src="https://www.google.com/recaptcha/api.js?render=@layoutModel.yourPublicKey"></script>

Changes on the .cshtml view page

There are two steps for the view page where you want to display the reCaptcha box:

Add this to the view where you want the reCaptcha box to display:

<div class="g-recaptcha" data-sitekey="@Model.YourPublicKey">

And add this script at the bottom of that view:

<script src='https://www.google.com/recaptcha/api.js'></script>

HttpPost action in the controller

You will need some code in the post action of your controller to hit the Google reCaptcha service and get a response if it appears this is a valid person – something like this:

var response = Request["g-recaptcha-response"];
var reCaptchaSecretKey = ConfigurationManager.AppSettings["ReCaptchaPrivateKey"];
var webClient = new WebClient();
var resultFromGoogle = webClient.DownloadString(string.Format("https://www.google.com/recaptcha/api/siteverify?secret={0}&response={1}", reCaptchaSecretKey, response));
var parsedResponseFromGoogle = JObject.Parse(resultFromGoogle);
var thisIsARealPersonNotARobot = (bool)parsedResponseFromGoogle.SelectToken("success");

With that result in hand, you can decide how to handle a success or failure.
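
For example, one option (sketched with placeholder names) is to add a model error and redisplay the form when the check fails:

// Sketch only: 'model' is whatever viewmodel your post action already binds.
if (!thisIsARealPersonNotARobot)
{
    ModelState.AddModelError(string.Empty, "reCAPTCHA validation failed – please try again.");
    return View(model);
}

// Otherwise, continue processing the form as usual.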

Gotchas:

I noticed that reCAPTCHA tried to send its request over TLS 1.1 and our site will not allow that – we require TLS 1.2 – so I had to force it to use only 1.2 with this setting at the top of the HttpPost controller action:

  ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

Thanks for reading and happy coding,
-Jim

Disable Html.DropDownListFor if only one item in dropdown


@Html.DropDownListFor(m => m.SelectedValue,
    new SelectList(Model.CollectionOfItemsForDropdown, "ValueField", "NameField"),
    Model.CollectionOfItemsForDropdown.Count > 1
        ? (object)new { @class = "form-control", required = "true" }
        : new { @class = "form-control", required = "true", disabled = "disabled" })

You have to use the conditional operator with the two anonymous objects here because the dropdown will be rendered as disabled if the word ‘disabled’ appears in the tag in any form.
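
As an alternative sketch, you can build the attributes in a Razor code block as a Dictionary<string, object> and only add the disabled entry when it is needed, since DropDownListFor also has an overload that accepts a dictionary of html attributes:

@{
    // Only add "disabled" when there is a single item, so the attribute is never rendered otherwise.
    var dropdownAttributes = new Dictionary<string, object>
    {
        { "class", "form-control" },
        { "required", "true" }
    };

    if (Model.CollectionOfItemsForDropdown.Count <= 1)
    {
        dropdownAttributes.Add("disabled", "disabled");
    }
}
@Html.DropDownListFor(m => m.SelectedValue,
    new SelectList(Model.CollectionOfItemsForDropdown, "ValueField", "NameField"),
    dropdownAttributes)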

NHibernate Session Manager needs HttpContext for NUnit testing

In a codebase I work in, NHibernate needs an HttpContext to get its current session – like this:


public static ISession GetCurrentSession()
{
    var context = HttpContext.Current;
    var currentSession = context.Items[CurrentSessionKey] as ISession;
    ...

To get my hands on a session for a NUnit test – I put this in the setup:


HttpContext.Current = new HttpContext(
    new HttpRequest(null, "http://tempuri.org", null),
    new HttpResponse(null));

 

Also, I wipe the context in the teardown:

[TearDown]
public void TearDown()
{
    HttpContext.Current = null;
}

Credit for this goes to:

http://caioproiete.net/en/fake-mock-httpcontext-without-any-special-mocking-framework/

Downloading a file from MVC controller to .ascx using javascript ajax

Here is a simple way to download a file to a .NET user control from a newer MVC controller.

The Controller code:

public FileResult DownloadSweetFile()
{
    var downloadDirectory = _appSettings.PathToFile;
    var filePathAndName = Path.Combine(downloadDirectory, "MySweetFile.pdf");

    var cd = new System.Net.Mime.ContentDisposition
    {
        FileName = "MySweetFile.pdf",
        Inline = false, // NOTE: This forces always prompting the user to download, rather than opening the file in the browser
    };
    Response.AppendHeader("Content-Disposition", cd.ToString());

    return File(filePathAndName, "application/pdf");
}

I used a button on the page to fire the JavaScript:

<button type="button" id="btnSweetDownload" onclick="downloadSweetFile();">Click here to download a Sweet File!</button>

And here is the JavaScript to use in the .ascx file:

function downloadSweetFile() {
    var url = '/MyControllerName/DownloadSweetFile';
    window.location = url;
}