How I fixed WSL 2 filesystem performance issues
In my development workflow (DevOps and scripting, mainly – I’m a security practitioner, not a programmer) I frequently switch between Windows and WSL. I work a lot with Ansible, and I love the fact that with WSL, I can enable full Ansible linting support in Visual Studio Code. The problem is that there are known cross-OS filesystem performance issues with WSL 2. At the moment, Microsoft recommends that if you require “Performance across OS file systems”, you should use WSL 1. Yuck.
What I want to do is to have a folder on my Windows hard drive, C:\Repos, that contains all the repositories I use. I want that same folder to be available in WSL as the directory /repos. Network file shares are out of the question, because, performance. (Have you tried git operations on a CIFS share? Ugh.)
The old way – share from Windows to WSL
Until this week, I’d been sharing the Windows directory into WSL distros using /etc/fstab and automount options. Fstab contained this line:
C:/Repos /repos drvfs uid=1000,gid=1000,umask=22 0 0
And /etc/wsl.conf:
[automount]
options="metadata,umask=0033"
But with this setup, every so often WSL filesystem operations would grind to a halt and I’d need to terminate and restart the distro. It could be minutes or days before the problem resurfaced.
The Windows Subsystem for Linux September 2023 update, currently available only for Windows Insider builds, was supposed to fix some of the issues. I tried it. The fixes broke Docker and didn’t improve filesystem performance sufficiently. Even after a Docker upgrade (produced by the Docker folks in collaboration with the WSL team), port mapping remained broken.
The new way – share from WSL to Windows
So let’s fix this once and for all. Maybe the WSL filesystem performance issues will go away one day, but I need to get on with my work today, not at some unspecified point in the future. I also don’t like running insider builds, and neither did my endpoint protection software. (That’s another story.) In the meantime, we need to move the files into WSL, where the performance issues disappear. It’s cross-OS access that causes the problems.
Now I know about \\wsl.localhost, but unfortunately this confuses some of the programs I use day-to-day, including some VS Code plugins. I really need all Windows programs to believe and act like the files are on my hard drive.
After much pulling together of information from dark, dusty corners of the internet, I discovered that, with the latest versions of Windows, you can create symbolic links to the WSL filesystem. So the files move into WSL (as /repos) and we create a symbolic link to that directory at C:\Repos. This can be as simple as uttering the following PowerShell incantation:
New-Item -ItemType SymbolicLink -Path "C:\Repos" -Target "\\wsl.localhost\AlmaLinux9\repos"
This should be fairly self-explanatory. In my case, I’m actually mapping to /mnt/wsl/repos, for reasons I’ll explain in the next section.
I have two VS Code workspaces – one for working directly in Windows, and the other for working in remote WSL mode. The Windows workspace points to C:\Repos and the WSL workspace points to /repos. When I restarted both workspaces after making these changes and moving the files into WSL, VS Code saw no difference. The files were still available, as before. But remote WSL operations now ran quicker.
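For reference, the Windows-side workspace file can be as minimal as the following sketch (the file name is my own convention – yours may differ):

```jsonc
// Repos.code-workspace (Windows side) – points at the symlinked folder
{
  "folders": [
    { "path": "C:\\Repos" }
  ]
}
```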
Bonus: share folder with multiple distros
Ah, but what if you need the same folder to be available in more than one distribution? The same /repos folder in AlmaLinux, Oracle Linux and Ubuntu? Not network mapped? Is that even possible?
Absolutely it is. It’s possible through the expedient of mounting an additional virtual hard disk, which becomes available to all distros. This freaks me out slightly, because – what about file system locking? Deadlocks? Race conditions? Okay, calm down Rob, just exercise the discipline of only opening files within one distro at a time. You got this.
Create yourself a new VHDX file. I store mine in roughly the same place WSL stores all its VHDX files:
$DiskSize = 10GB
$DiskName = "Repos.vhdx"
$VhdxDirectory = Join-Path -Path $Env:LOCALAPPDATA -ChildPath "wsl"
if (!(Test-Path -Path $VhdxDirectory)) {
New-Item -Path $VhdxDirectory -ItemType Directory
}
$DiskPath = Join-Path -Path $VhdxDirectory -ChildPath $DiskName
New-VHD -Path $DiskPath -SizeBytes $DiskSize -Dynamic
This gives you a raw, unformatted virtual hard drive at C:\Users\rob\AppData\Local\wsl\Repos.vhdx. Mount it within WSL. From the PowerShell session used above:
wsl --mount --vhd $DiskPath --bare
Now inside one of your distros, you’ll have a new drive, ready to be formatted to ext4. Easy enough to work out which device is the new drive – sort by timestamp:
ls -1t /dev/sd* | head -n 1
You’ll get something like /dev/sdd. Initialise this disk in WSL:
sudo mkfs -t ext4 /dev/sdd
(Do check you’re working with the correct drive – don’t just copy and paste!)
Back in the PowerShell session, we unmount the bare drive and remount it. WSL will automatically detect that the disk now contains a single ext4 partition and will mount that under /mnt/wsl – in all distros, mind you.
wsl --unmount $DiskPath
wsl --mount --vhd $DiskPath --name repos
The drive will now be mounted at /mnt/wsl/repos. If necessary, move any files into this location and create a new symlink at /repos. In WSL:
shopt -s dotglob
sudo mv /repos/* /mnt/wsl/repos
sudo rmdir /repos
sudo ln -s /mnt/wsl/repos /repos
(shopt here ensures the move includes any hidden files with names beginning ‘.’.)
When sharing this directory into Windows, you need to use the full path /mnt/wsl/repos, not the symlink /repos. But otherwise it works the same as before.
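In other words, in this multi-distro setup the Windows symlink from earlier should target the mounted disk rather than the old directory. A sketch (the distro name AlmaLinux9 is from my setup – substitute your own):

```powershell
# Recreate the symlink so Windows sees the shared ext4 disk
New-Item -ItemType SymbolicLink -Path "C:\Repos" -Target "\\wsl.localhost\AlmaLinux9\mnt\wsl\repos"
```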
This mount will not persist across reboots, so create a scheduled task, triggered at log on, to remount it.
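Something like the following, run from an elevated PowerShell session, should do it. This is a sketch: the task name is arbitrary and the VHDX path assumes the location used earlier.

```powershell
# Register a log-on task that remounts the repos VHDX
$DiskPath = Join-Path -Path $Env:LOCALAPPDATA -ChildPath "wsl\Repos.vhdx"
$Action   = New-ScheduledTaskAction -Execute "wsl.exe" -Argument "--mount --vhd `"$DiskPath`" --name repos"
$Trigger  = New-ScheduledTaskTrigger -AtLogOn
Register-ScheduledTask -TaskName "Mount WSL repos disk" -Action $Action -Trigger $Trigger
```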
Introduction to AI image generation (Stable Diffusion)
For saying that I work in technology, I feel embarrassingly late to this party. I was recently transfixed by posts on Mastodon that showed images generated by Midjourney. I’d never heard of Midjourney. This started me off down a rabbit hole.
A few metres down the rabbit hole, I read about InvokeAI, an open-source alternative to Midjourney. A few metres more and I discovered that I would be able to run InvokeAI on my PC, which is equipped with an NVIDIA GeForce RTX 3060 graphics card.
That’s how I found myself installing and running InvokeAI, despite still knowing virtually nothing about any facet of AI, let alone image generation. Faced with the InvokeAI web interface, the first questions become: “What do all these knobs and buttons do?”, “What are ‘CFG Scale’, ‘Sampler’ and ‘Steps’?” and “What is the difference between the ‘Models’?”
In case you find yourself in the same position, here’s a handy guide.
What is Midjourney?
Midjourney is an independent research lab that produces a proprietary artificial intelligence program under the same name that creates images from textual prompts. It is powered by AI and machine learning algorithms and uses text prompts and parameters to generate images. It can be used for both creative and practical applications, such as creating custom artwork and logos, or visualising data. Midjourney is constantly learning and improving, and can be accessed through the Discord chat app by messaging the Midjourney Bot.
What is Stable Diffusion?
Stable Diffusion is an image-generating model developed by Stability AI. It is powered by artificial intelligence and machine learning algorithms, and uses text prompts and parameters to generate images. It can be used for both creative and practical applications, such as creating custom artwork and logos, or visualising data. Stable Diffusion is open-source, meaning anyone can access and use the model without cost. The model has a relatively good understanding of contemporary artistic styles, making it a popular choice for creative applications.
Stable Diffusion exposes a CFG Scale (see below), a parameter that balances how closely the generated image follows the text prompt against how freely the model is allowed to improvise. The algorithm also uses a “sampler” (see below), which governs how the model iteratively refines random noise into a finished image.
What’s InvokeAI then?
InvokeAI is an implementation of Stable Diffusion. It is powered by artificial intelligence and machine learning algorithms, and uses text prompts and parameters to generate images. It is optimised for efficiency, and can generate images with relatively low VRAM requirements. It supports the use of custom models.
The InvokeAI web interface offers lots of parameters. The rest of this article is dedicated to explaining those parameters.
CFG Scale
The CFG Scale, or Classifier-Free Guidance Scale, is a parameter in the Stable Diffusion image generation model. It controls how much the image generation process is guided by the text prompt, and how much it is left to chance. A high CFG Scale value produces output more closely resembling the text prompt.
A low CFG Scale value in Stable Diffusion can result in the output image having a lower fidelity and quality. It can for example lead to the model generating extra arms and legs! But it also increases the diversity and creativity of the result.
Increasing the CFG Scale value can result in higher quality, as well as a more dynamic colour range. It can also lead to more detailed images with a higher resolution. That said, high CFG values can lead to unrealistic-looking images.
The best CFG Scale value range for Stable Diffusion is generally between 7 and 11. Higher values of 50 to 100 are recommended for good outpainting results.
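To make the idea concrete, here’s a toy Python sketch (my illustration, not code from Stable Diffusion itself) of how classifier-free guidance combines an unconditioned prediction with a prompt-conditioned one. In real samplers these are image-sized tensors; plain floats keep the arithmetic visible:

```python
def cfg_combine(uncond_pred: float, cond_pred: float, cfg_scale: float) -> float:
    """Steer the denoising step toward the prompt-conditioned prediction.

    cfg_scale = 1 reproduces the conditional prediction exactly;
    larger values push further toward the prompt, smaller toward randomness.
    """
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

print(cfg_combine(0.0, 1.0, 1.0))  # 1.0 – just the conditional prediction
print(cfg_combine(0.0, 1.0, 7.5))  # 7.5 – amplified pull toward the prompt
```

This is why very high CFG values can look overcooked: the guidance term is amplified well beyond either of the model’s raw predictions.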
The Steps parameter
The Steps parameter of Stable Diffusion defines the number of inference steps used in the model. It dictates the amount of detail that is produced when generating images from text descriptions.
Generally, increasing the number of steps will produce higher quality images, but this will come at the expense of slower inference. The optimal number of steps will depend on the dataset and task at hand. In most cases, images converge within 30–50 steps and will not change significantly with more.
The Sampler parameter
InvokeAI provides several different samplers that can be used to generate samples from a given data distribution. The k_euler sampler uses the Euler–Maruyama algorithm to generate samples from a given distribution. The k_dpm_2 sampler uses a second-order solver. Each of the samplers has its own advantages and disadvantages, and experimentation will be rewarded.
The sampler defines the path the model takes from pure noise to a finished image: at each step, it uses the model’s prediction to remove a little more noise. The choice of sampler affects how quickly the image converges and how much detail survives, which is why some samplers produce good results in fewer steps than others.
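As a rough illustration (my own toy sketch, not InvokeAI’s implementation), an Euler-style sampler nudges the image along the model’s predicted direction as the noise level steps down through a schedule:

```python
def euler_step(x: float, predicted_direction: float,
               sigma: float, sigma_next: float) -> float:
    """One Euler step: move x along the predicted direction as the noise
    level drops from sigma to sigma_next. Real samplers do this on tensors."""
    return x + predicted_direction * (sigma_next - sigma)

# A toy noise schedule decreasing over 4 steps; the model's predicted
# direction is held constant here purely to keep the example simple.
sigmas = [1.0, 0.75, 0.5, 0.25, 0.0]
x = 1.0
for s, s_next in zip(sigmas, sigmas[1:]):
    x = euler_step(x, predicted_direction=1.0, sigma=s, sigma_next=s_next)
print(x)  # 0.0 – the noise has been fully walked off over the schedule
```

More steps mean a finer-grained schedule and smaller, more accurate moves per step, which is the intuition behind the Steps parameter above.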
Prompts
Prompts in Stable Diffusion are phrases or words used to generate AI-generated images. They are used to provide the AI model with guidance on what type of image to create. Prompts can range from abstract concepts to specific objects or scenes. They can be weighted to emphasise certain keywords.
Example prompts include “a picture of a black cat on a kitchen top”, “Paintings of Landscapes”, “Style cue: Steampunk / Clockpunk”, “a beautiful sunset over a beach” or “a futuristic cityscape”. Using very specific prompts helps the AI to generate more accurate and detailed images.
Different models*
Confusingly, you can use different models with InvokeAI. The most commonly used is the Stable Diffusion model. InvokeAI can use a range of other models for image processing and manipulation tasks. InvokeAI also provides tools for creating custom models, allowing users to create their own models that can be used in combination with the existing models.
In practice, the “models” you choose between in InvokeAI are checkpoint files: variants of Stable Diffusion and community fine-tunes trained on different imagery, each of which lends a different style and subject-matter bias to the output. Some are also tuned for specialised tasks such as inpainting.
Sneaky disclaimer
So here’s my disclaimer. I’ve pulled a slightly sneaky trick with this blog post. I’ve recruited AI to explain AI. That seemed logical. I used AI text generation services YouChat and Perplexity to explain Stable Diffusion, using prompts like “Explain the CFG Scale in Stable Diffusion.”
That feels like cheating. But I did at least sanity-check the results and edit them before posting here. Some of the answers were in fact wrong. And it was no quicker writing the blog post this way. But roughly 80% of the text in this blog post was AI-generated. How does that make you feel? Tell me in the comments!
*The AI-based text-generators really struggled to explain the difference between the various models that work with InvokeAI. My suggestion is to install some and experiment!
Querying GitHub Projects V2 with GraphQL in Laravel
Note: I previously wrote about using plain PHP to query GitHub Projects V2. In this post I offer some tips for querying using Laravel.
GitHub’s new Projects are not accessible via the older REST API. Working with them programmatically involves learning some GraphQL, which can be a headache the first time you encounter it. Here’s my approach, using Laravel.
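If GraphQL is new to you, the mental model is: one query string (with declared variables) plus a JSON variables object, POSTed to a single endpoint. A minimal example of the shape – illustrative only, with “MyOrg” as a placeholder organisation:

```graphql
query listProjects($count: Int!) {
  organization(login: "MyOrg") {
    projectsV2(first: $count) {
      nodes {
        number
        id
        title
      }
    }
  }
}
```

The accompanying variables object would be, for example, {"count": 20}. All the queries below follow this same pattern.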
Authentication
Get a GitHub personal access token, limiting its permissions as appropriate to your app. Then add to your .env:
GH_TOKEN=[your token]
HTTP requests
We can standardise GitHub GraphQL queries by using an HTTP facade macro. In app/Providers/AppServiceProvider.php, add to the boot() method:
/**
* Make request to GitHub using the GraphQL API.
*
* $query - a GraphQL query string
* $variables - an array of GraphQL variables
*
* Call like: $data = Http::githubGraphQL($query, $variables)->throw()->json()['data'];
*/
Http::macro('githubGraphQL', function (string $query, array $variables) {
return Http::withHeaders([
'Accept' => 'application/vnd.github+json',
'Authorization' => 'Bearer ' . env('GH_TOKEN')
])->post('https://api.github.com/graphql', [
'query' => $query,
'variables' => $variables,
]);
});
GitHubProjects model
This sample app/Models/GitHubProjects.php shows you the sort of query you can run:
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Http;
class GitHubProjects extends Model
{
use HasFactory;
/**
* getFirstN - get first N projects for the GitHub organisation, ordered by
* title
*
* @param integer $count - number of projects to return
* @return array
*/
public static function getFirstN(int $count = 20): array
{
$projects = Http::githubGraphQL(
<<<EOD
query getFirstN(\$count: Int) {
organization(login: "MyOrg") {
projectsV2(first: \$count, orderBy: {field: TITLE, direction: ASC}) {
nodes {
number
id
title
}
}
}
}
EOD,
[ "count" => $count ]
)->throw()->json()['data'];
return $projects['organization']['projectsV2']['nodes'];
}
/**
* getFirstNIssues
*
* @param integer $projectNumber
* @param integer $count
* @return array
*/
public static function getFirstNIssues(int $projectNumber = 1, int $count = 20): array
{
$projectId = GitHubProjects::getIdByNumber($projectNumber);
$issues = Http::githubGraphQL(
<<<EOD
# Exclamation mark since projectId is required
query getFirstNIssues(\$projectId: ID!, \$count: Int) {
node(id: \$projectId) {
... on ProjectV2 {
items(first: \$count) {
nodes {
id
fieldValues(first: 8) {
nodes {
... on ProjectV2ItemFieldTextValue {
text
field {
... on ProjectV2FieldCommon {
name
}
}
}
... on ProjectV2ItemFieldDateValue {
date
field {
... on ProjectV2FieldCommon {
name
}
}
}
... on ProjectV2ItemFieldSingleSelectValue {
name
field {
... on ProjectV2FieldCommon {
name
}
}
}
}
}
content {
... on DraftIssue {
title
body
}
... on Issue {
title
assignees(first: 10) {
nodes {
login
}
}
}
... on PullRequest {
title
assignees(first: 10) {
nodes {
login
}
}
}
}
}
}
}
}
}
EOD,
[
"projectId" => $projectId,
"count" => $count
]
)->throw()->json()['data'];
return $issues['node']['items']['nodes'];
}
/**
* getByTitle - search for projects by title
*
* @param string $title - search for
* @return array
*/
public static function getByTitle(string $title): array
{
$projects = Http::githubGraphQL(
<<<EOD
query getByName(\$title: String!) {
organization(login: "MyOrg") {
projectsV2(
first: 20
orderBy: { field: TITLE, direction: ASC }
query: \$title
) {
nodes {
number
id
title
}
}
}
}
EOD,
[ "title" => $title ]
)->throw()->json()['data'];
return $projects['organization']['projectsV2']['nodes'];
}
/**
* getIdByNumber
*
* @param integer $number - the integer identifier of the project
* @return string - the internal GitHub project ID (e.g. PVT_kwDOBWQiz84AH53W)
*/
private static function getIdByNumber(int $number = 1)
{
$project = Http::githubGraphQL(
<<<EOD
# Exclamation mark since number is required
query getIdByNumber(\$number: Int!) {
organization(login: "MyOrg") {
projectV2(number: \$number) {
id
}
}
}
EOD,
['number' => $number]
)->throw()->json()['data'];
return $project['organization']['projectV2']['id'];
}
/**
* getProjectFields - get all the (custom) fields associated to a project
*
* @param string $projectID - GitHub's reference for the project
* @return array
*/
public static function getProjectFields(string $projectID): array
{
$fields = Http::githubGraphQL(
<<<EOD
query getProjectFields(\$node: ID!) {
node(id: \$node) {
... on ProjectV2 {
fields(first: 20) {
nodes {
... on ProjectV2Field {
id
name
}
... on ProjectV2IterationField {
id
name
configuration {
iterations {
startDate
id
}
}
}
... on ProjectV2SingleSelectField {
id
name
options {
id
name
}
}
}
}
}
}
}
EOD,
[ "node" => $projectID ]
)->throw()->json()['data'];
return $fields['node']['fields']['nodes'];
}
}
Controller
This is a very basic sample controller method to get you started. In practice you’ll use views:
public function index()
{
/*
Search for "MyProject" in projects
Project number is $projects[0]['number'];
Project name is $projects[0]['title'];
Project ID is $projects[0]['id'];
*/
define('BR', "<br />\n");
$projects = GitHubProjects::getByTitle('MyProject');
if (isset($projects[0])) {
$projectNumber = $projects[0]['number'];
$issues = GitHubProjects::getFirstNIssues($projectNumber, 50);
echo "<h1>First 50 issues in MyProject project</h1>\n";
foreach ($issues as $issue) {
echo '<b>ID:</b> ' . $issue['id'] . BR;
echo '<b>Title:</b> ' . $issue['content']['title'] . BR;
if (isset($issue['content']['assignees']['nodes'][0])) {
echo '<b>Assignee:</b> ' . $issue['content']['assignees']['nodes'][0]['login'] . BR;
}
// Field types
foreach ($issue['fieldValues']['nodes'] as $field) {
if (isset($field['text']) && $field['field']['name'] != "Title") {
echo '<b>' . $field['field']['name'] . ':</b> ' . $field['text'] . BR;
}
if (isset($field['date'])) {
echo '<b>' . $field['field']['name'] . ':</b> ' . $field['date'] . BR;
}
if (isset($field['name'])) {
echo '<b>' . $field['field']['name'] . ':</b> ' . $field['name'] . BR;
}
}
echo BR;
}
} else {
echo "No projects found";
}
}
Useful resources
The following are invaluable for working with GitHub’s GraphQL API:
- Official reference
- GitHub GraphQL Explorer (run live queries on your account)
- GraphQL Formatter (you can also use the “Prettify” button in the GraphQL Explorer)