How I fixed WSL 2 filesystem performance issues


In my development workflow (DevOps and scripting, mainly – I’m a security practitioner, not a programmer) I frequently switch between Windows and WSL. I work a lot with Ansible, and I love the fact that with WSL, I can enable full Ansible linting support in Visual Studio Code. The problem is that there are known cross-OS filesystem performance issues with WSL 2. At the moment, Microsoft recommends that if you require “Performance across OS file systems”, you should use WSL 1. Yuck.

What I want to do is to have a folder on my Windows hard drive, C:\Repos, that contains all the repositories I use. I want that same folder to be available in WSL as the directory /repos. Network file shares are out of the question, because, performance. (Have you tried git operations on a CIFS share? Ugh.)

The old way – share from Windows to WSL

Until this week, I’d been sharing the Windows directory into WSL distros using /etc/fstab and automount options. Fstab contained this line:

C:/Repos /repos drvfs uid=1000,gid=1000,umask=22 0 0

And /etc/wsl.conf:

[automount]
options="metadata,umask=0033"

But with this setup, every so often WSL filesystem operations would grind to a halt and I’d need to terminate and restart the distro. It could be minutes or days before the problem resurfaced.

The Windows Subsystem for Linux September 2023 update, currently available only for Windows Insider builds, was supposed to fix some of the issues. I tried it. The fixes broke Docker and didn’t improve filesystem performance sufficiently. Even after a Docker upgrade (produced by the Docker folks in collaboration with the WSL team), port mapping remained broken.

The new way – share from WSL to Windows

So let’s fix this once and for all. Maybe the WSL filesystem performance issues will go away one day, but I need to get on with my work today, not at some unspecified point in the future. I also don’t like running Insider builds, and neither did my endpoint protection software. (That’s another story.) In the meantime, we need to move the files into WSL, where the performance issues disappear. It’s cross-OS access that causes the problems.

Now I know about \\wsl.localhost, but unfortunately this confuses some of the programs I use day-to-day, including some VS Code plugins. I really need all Windows programs to believe and act like the files are on my hard drive.

After much pulling together of information from dark, dusty corners of the internet, I discovered that, with the latest versions of Windows, you can create symbolic links to the WSL filesystem. So the files move into WSL (as /repos) and we create a symbolic link to that directory at C:\Repos. This can be as simple as uttering the following PowerShell incantation (run it from an elevated prompt, or enable Developer Mode first – creating symbolic links is a privileged operation by default):

New-Item -ItemType SymbolicLink -Path "C:\Repos" -Target "\\wsl.localhost\AlmaLinux9\repos"

This should be fairly self-explanatory. In my case, I’m actually mapping to /mnt/wsl/repos, for reasons I’ll explain in the next section.
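For reference, since I map to /mnt/wsl/repos, the command I actually run looks more like this (distro name as above – substitute your own):

New-Item -ItemType SymbolicLink -Path "C:\Repos" -Target "\\wsl.localhost\AlmaLinux9\mnt\wsl\repos"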

I have two VS Code workspaces – one for working directly in Windows, and the other for working in remote WSL mode. The Windows workspace points to C:\Repos and the WSL workspace points to /repos. When I restarted both workspaces after making these changes and moving the files into WSL, VS Code saw no difference. The files were still available, as before. But remote WSL operations now ran quicker.

Bonus: share folder with multiple distros

Ah, but what if you need the same folder to be available in more than one distribution? The same /repos folder in AlmaLinux, Oracle Linux and Ubuntu? Not network mapped? Is that even possible?

Absolutely it is. It’s possible through the expedient of mounting an additional virtual hard disk, which becomes available to all distros. This freaks me out slightly, because – what about file system locking? Deadlocks? Race conditions? Okay, calm down Rob, just exercise the discipline of only opening files within one distro at a time. You got this.

Create yourself a new VHDX file. I store mine in roughly the same place WSL stores all its VHDX files:

$DiskSize = 10GB
$DiskName = "Repos.vhdx"
$VhdxDirectory = Join-Path -Path $Env:LOCALAPPDATA -ChildPath "wsl"
if (!(Test-Path -Path $VhdxDirectory)) {
    New-Item -Path $VhdxDirectory -ItemType Directory
}
$DiskPath = Join-Path -Path $VhdxDirectory -ChildPath $DiskName
New-VHD -Path $DiskPath -SizeBytes $DiskSize -Dynamic

This gives you a raw, unformatted virtual hard drive at C:\Users\rob\AppData\Local\wsl\Repos.vhdx. (Note that New-VHD comes from the Hyper-V PowerShell module, so you’ll need the Hyper-V management tools enabled for the above to work.) Mount it within WSL, from the PowerShell session used above:

wsl --mount --vhd $DiskPath --bare

Now inside one of your distros, you’ll have a new drive, ready to be formatted as ext4. It’s easy enough to work out which device is the new drive – sort by timestamp:

ls -1t /dev/sd* | head -n 1

You’ll get something like /dev/sdd. Initialise this disk in WSL:

sudo mkfs -t ext4 /dev/sdd

(Do check you’re working with the correct drive – don’t just copy and paste!)

Back in the PowerShell session, we unmount the bare drive and remount it. WSL will automatically detect that the disk now contains a single ext4 partition and will mount that under /mnt/wsl – in all distros, mind you.

wsl --unmount $DiskPath
wsl --mount --vhd $DiskPath --name repos

The drive will now be mounted at /mnt/wsl/repos. If necessary, move any files into this location and create a new symlink at /repos. In WSL:

shopt -s dotglob
sudo mv /repos/* /mnt/wsl/repos
sudo rmdir /repos
sudo ln -s /mnt/wsl/repos /repos

(shopt -s dotglob ensures the move includes any hidden files with names beginning with a dot.)

When sharing this directory into Windows, you need to use the full path /mnt/wsl/repos, not the symlink /repos. But otherwise it works the same as before.

This mount will not persist across reboots, so create a scheduled task, triggered at logon, to remount the disk.
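Here’s a minimal sketch of such a task in PowerShell – the task name is arbitrary, the VHDX path assumes the location used earlier, and the task runs elevated because wsl --mount requires administrator rights:

$VhdxPath = Join-Path -Path $Env:LOCALAPPDATA -ChildPath "wsl\Repos.vhdx"
# Remount the VHDX whenever we log on
$Action  = New-ScheduledTaskAction -Execute "wsl.exe" -Argument "--mount --vhd `"$VhdxPath`" --name repos"
$Trigger = New-ScheduledTaskTrigger -AtLogOn
Register-ScheduledTask -TaskName "Mount repos VHDX" -Action $Action -Trigger $Trigger -RunLevel Highest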

Excel – correctly sort IP addresses

This post is probably only for pedants who care passionately about correctly sorting IP addresses in an Excel spreadsheet. This approach uses pure functions – no VBA. I prefer it to some other approaches because, frankly, they sail right over my head.

Let’s start with a column of IP addresses.

Excel tables are lovely for working with data like this. If you convert your data to a table (Insert > Table), you get to use named column references, which we’ll see in a moment.

You can’t sort this column meaningfully as-is. We need an additional column, which we’ll use to transform the contents of the IP column.

Then, in any row of that column, we enter this formula:

=
IF(0,"##### FIRST OCTET #####","") &
TEXT(
  LEFT(
    [@IP],
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        1
      )
    ) - 1
  ),
  "000"
)
& "." &
IF(0,"##### SECOND OCTET #####","") &
TEXT(
  MID(
    [@IP],
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        1
      )
    ) + 1,
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        2
      )
    )
    -
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        1
      )
    )
  ),
  "000"
)
& "." &
IF(0,"##### THIRD OCTET #####","") &
TEXT(
  MID(
    [@IP],
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        2
      )
    ) + 1,
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        3
      )
    )
    -
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        2
      )
    )
  ),
  "000"
)
& "." &
IF(0,"##### FOURTH OCTET #####","") &
TEXT(
  MID(
    [@IP],
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        3
      )
    ) + 1,
    
    IF(
      ISERROR(
        FIND("/",[@IP])
      ),
      LEN([@IP]),
      FIND("/",[@IP]) - 1
    )    
    -
    FIND(
      CHAR(134),
      SUBSTITUTE(
        [@IP],
        ".",
        CHAR(134),
        3
      )
    )
  ),
  "000"
)
&
IF(0,"##### CIDR #####","") &
IF(
  ISERROR(FIND("/",[@IP])),
  "",
  RIGHT(
    [@IP],
    LEN([@IP]) - FIND("/",[@IP]) + 1
  )
)

You end up with a zero-padded rendering of each address (10.1.200.31 becomes 010.001.200.031), on which you can now perform an alphabetical (A-Z) sort.

If you like, you can hide that column, so you don’t need to look at its hideousness. Then whenever you need to resort, go to Data > Sort.

Some things to mention about this formula:

  • [@IP] is the named column reference I referred to previously.
  • I edited this formula in a code editor (Notepad++), so I could nicely indent and keep track of opened and closed parentheses. This makes life much easier when writing long formulae! There’s one gotcha – Notepad++ by default uses tabs rather than spaces, which breaks Excel. Make sure there are no tab characters in your indentation.
  • The IF(0,"##### THIRD OCTET #####","") stuff is a hack, which allows you to insert a comment into a text-based formula. The 0 evaluates to FALSE, so it returns the function’s third parameter – an empty string. The second parameter is where I place my comment. Handy!
  • Excel doesn’t have a function to find the position of the nth occurrence of a string, so there’s a nifty two-step hack for this, which is not my original idea. First, we use the SUBSTITUTE() function, which can replace the nth occurrence of some text: we replace the nth full stop (“.”) with CHAR(134) – the dagger symbol (†). Then we find the position of that CHAR(134), to feed into the LEFT()/MID()/RIGHT() functions. (See the worked example after this list.)
  • The formula handles CIDR notation (a trailing “/24”, for example).
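If the SUBSTITUTE()/FIND() trick seems opaque, here’s a minimal standalone version (the IP address is just an illustration):

=FIND(CHAR(134), SUBSTITUTE("10.1.200.31", ".", CHAR(134), 2))

SUBSTITUTE() first turns the string into 10.1†200.31, so FIND() returns 5 – the position of the second full stop. The big formula feeds positions like this into LEFT() and MID() to slice out each octet.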

Script to clone a VM with free VMware ESXi

Many people run free versions of ESXi, particularly in lab environments. Unfortunately with the free version of ESXi, the VMware API is read-only. This limits (or complicates) automation.

I was looking for a way to clone guest VMs with the minimum of effort. This script, which took inspiration from many sources on the internet, is the result. It takes advantage of the fact that although the API is limited, there are plenty of actions you can take via SSH, including calls to vim-cmd.
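If you’re curious what the script is doing on the host, the cloning itself boils down to a handful of ESXi shell commands, roughly these – list the registered VMs, clone each disk, then register the new VMX (datastore and VM names below are illustrative only):

vim-cmd vmsvc/getallvms
vmkfstools -i "/vmfs/volumes/datastore1/Donor/Donor.vmdk" -d thin "/vmfs/volumes/datastore1/Clone/Clone.vmdk"
vim-cmd solo/register "/vmfs/volumes/datastore1/Clone/Clone.vmx"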

<#

.SYNOPSIS
    Clones a VM.

.DESCRIPTION
    This script:

    - Retrieves a list of VMs attached to the host
    - Enables the user to choose which VM to clone
    - Clones the VM

    It must be run on a Windows machine that can connect to the virtual host.

    This depends on the Posh-SSH and PowerCLI modules, so from an elevated
    PowerShell prompt, run:

        Install-Module PoSH-SSH
        Install-Module VMware.PowerCLI

    For free ESXi, the VMware API is read-only. That limits what we can do with
    PowerCLI. Instead, we run certain commands through SSH. You will therefore
    need to enable SSH on the ESXi host before running this script.

    The script only handles simple hosts with datastores under /vmfs. And it
    clones to the same datastore as the donor VM. Your setup and requirements
    may be more complex. Adjust the script to suit.

.EXAMPLE
    From a PowerShell prompt:

      .\New-GuestClone.ps1 -ESXiHost 192.168.101.100

.COMPONENT
    VMware scripts

.NOTES
    This release:

        Version: 1.0
        Date:    8 July 2021
        Author:  Rob Pomeroy

    Version history:

        1.0 - 8 July 2021 - first release

#>
param
(
    [Parameter(Mandatory = $true, Position = 0)][String]$ESXiHost
)

####################
## INITIALISATION ##
####################

# Load necessary modules
Write-Host "Loading PowerShell modules..."
Import-Module PoSH-SSH
Import-Module VMware.PowerCLI

# Change to the directory where this script is running
Push-Location -Path ([System.IO.Path]::GetDirectoryName($PSCommandPath))


#################
## CREDENTIALS ##
#################

# Check for the creds directory; create it if it doesn't exist
If(-not (Test-Path -Path '.\creds' -PathType Container)) {
    New-Item -Path '.\creds' -ItemType Directory | Out-Null
}

# Looks for credentials file for the VMware host. Passwords are stored encrypted
# and will only work for the user and machine on which they're stored.
$credsFile = ('.\creds\' + $ESXiHost + '.creds')
If(-not (Test-Path -Path $credsFile)) {
    # Request credentials
    $creds = Get-Credential -Message "Enter root password for VMware host $ESXiHost" -User root
    $creds.Password | ConvertFrom-SecureString | Set-Content $credsFile
}
$ESXICredential = New-Object System.Management.Automation.PSCredential( `
    "root", `
    (Get-Content $credsFile | ConvertTo-SecureString)
)


#########################
## List VMs (PowerCLI) ##
#########################
#
# Disable HTTPS certificate check (not strictly needed if you use -Force) in
# later calls.
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false | Out-Null

# Connect to the ESXi server
Connect-VIServer -Server $ESXiHost -Protocol https -Credential $ESXICredential -Force | Out-Null
If(-not $?) {
    Throw "Connection to ESXi failed. If password issue, delete $credsFile and try again."
}

# Get all VMs, sorted by name
$guests = (Get-VM -Server $ESXiHost | Sort-Object -Property Name)

# Work out how much we need to left-pad the array index, when outputting
$padWidth = ([string]($guests.Count - 1)).Length

# Output the list of VMs, with array index padded so it lines up nicely
Write-Host ("Existing VMs (" + $guests.Count + "), sorted by name:")
for ( $i = 0; $i -lt $guests.count; $i++)
{
    If($guests[$i].PowerState -eq "PoweredOn") {
        Write-Host -ForegroundColor Red ("[" + "$i".PadLeft($padWidth, ' ') + "](ON) : " + $guests[$i].Name) 
    } Else {
        Write-Host ("[" + "$i".PadLeft($padWidth, ' ') + "](off): " + $guests[$i].Name) 
    }
}
Write-Host


##########################
## Choose a VM to clone ##
##########################

$chosenVM = 0
do {
    $inputValid = [int]::TryParse((Read-Host 'Enter the [number] of the VM to clone (the donor)'), [ref]$chosenVM)
    if($chosenVM -lt 0 -or $chosenVM -ge $guests.Count) {
        $inputValid = $false
    }
    if (-not $inputValid) {
        Write-Host ("Must be a number in the range 0 to " + ($guests.Count - 1).ToString() + ". Try again.")
    }
} while (-not $inputValid)

# Check the VM is powered off
if($guests[$chosenVM].PowerState -ne "PoweredOff") {
    Throw "ERROR: VM must be powered off before cloning"
}

# Get VM's datastore, directory and VMX; we assume this is at /vmfs/volumes
If(-not ($guests[$chosenVM].ExtensionData.Config.Files.VmPathName -match '\[(.*)\] ([^\/]*)\/(.*)')) {
    Throw "ERROR: Could not calculate the datastore"
}
$VMdatastore = $Matches[1]
$VMdirectory = $Matches[2]
$VMXlocation = ("/vmfs/volumes/" + $VMdatastore + "/" + $VMdirectory + "/" + $Matches[3])
$VMdisks     = $guests[$chosenVM] | Get-HardDisk


###############################
## File test (PoSH-SSH SFTP) ##
###############################

# Clear any open SFTP sessions
Get-SFTPSession | Remove-SFTPSession | Out-Null

# Start a new SFTP session
(New-SFTPSession -Computername $ESXiHost -Credential $ESXICredential -Acceptkey -Force -WarningAction SilentlyContinue) | Out-Null

# Test that we can locate the VMX file
If(-not (Test-SFTPPath -SessionId 0 -Path $VMXlocation)) {
    Throw "ERROR: Cannot find donor VM's VMX file"
}


#################
## New VM name ##
#################

$validInput = $false
While(-not $validInput) {
    $newVMname = Read-Host "Enter the name of the new VM"
    $newVMdirectory = ("/vmfs/volumes/" + $VMdatastore + "/" + $newVMname)

    # Check if the directory already exists
    If(Test-SFTPPath -SessionId 0 -Path $newVMdirectory) {
        $ynTest = $false
        While(-not $ynTest) {
            $yn = (Read-Host "A directory already exists with that name. Continue? [Y/N]").ToUpper()
            if (($yn -ne 'Y') -and ($yn -ne 'N')) {
                Write-Host "ERROR: enter Y or N"
            } else {
                $ynTest = $true
            }
        }
        if($yn -eq 'Y') {
            $validInput = $true
        } else {
            Write-Host "You will need to choose a different VM name."
        }
    } else {
        If($newVMdirectory.Length -lt 1) {
            Write-Host "ERROR: enter a name"
        } else {
            $validInput = $true

            # Create the directory
            New-SFTPItem -SessionId 0 -Path $newVMdirectory -ItemType Directory | Out-Null
        }
    }
}


###################################
## Copy & transform the VMX file ##
###################################

# Clear all previous SSH sessions
Get-SSHSession | Remove-SSHSession | Out-Null

# Connect via SSH to the VMware host
(New-SSHSession -Computername $ESXiHost -Credential $ESXICredential -Acceptkey -Force -WarningAction SilentlyContinue) | Out-Null

# Replace VM name in new VMX file
Write-Host "Cloning the VMX file..."
$newVMXlocation = $newVMdirectory + '/' + $newVMname + '.vmx'
$command = ('sed -e "s/' + $VMdirectory + '/' + $newVMname + '/g" "' + $VMXlocation + '" > "' + $newVMXlocation + '"')
($commandResult = Invoke-SSHCommand -Index 0 -Command $command) | Out-Null

# Set the display name correctly (might be wrong if donor VM name didn't match directory name)
$find    = 'displayName \= ".*"'
$replace = 'displayName = "' + $newVMname + '"'
$command = ("sed -i 's/$find/$replace/' '$newVMXlocation'")
($commandResult = Invoke-SSHCommand -Index 0 -Command $command) | Out-Null

# Blank the MAC address for adapter 1
$find    = 'ethernet0.generatedAddress \= ".*"'
$replace = 'ethernet0.generatedAddress = ""'
$command = ("sed -i 's/$find/$replace/' '$newVMXlocation'")
($commandResult = Invoke-SSHCommand -Index 0 -Command $command) | Out-Null


#####################
## Clone the VMDKs ##
#####################

Write-Host "Please be patient while cloning disks. This can take some time!"
foreach($VMdisk in $VMdisks) {
    # Extract the filename
    $VMdisk.Filename -match "([^/]*\.vmdk)" | Out-Null
    $oldDisk = ("/vmfs/volumes/" + $VMdatastore + "/" + $VMdirectory + "/" + $Matches[1])
    $newDisk = ($newVMdirectory + "/" + ($Matches[1] -replace $VMdirectory, $newVMname))

    # Clone the disk
    $command = ('/bin/vmkfstools -i "' + $oldDisk + '" -d thin "' + $newDisk + '"')
    Write-Host "Cloning disk $oldDisk to $newDisk with command:"
    Write-Host $command
    # Set a timeout of 10 minutes/600 seconds for the disk to clone
    ($commandResult = Invoke-SSHCommand -Index 0 -Command $command -TimeOut 600) | Out-Null
    #Write-Host $commandResult.Output
   
}


########################
## Register the clone ##
########################

Write-Host "Registering the clone..."
$command = ('vim-cmd solo/register "' + $newVMXlocation + '"')
($commandResult = Invoke-SSHCommand -Index 0 -Command $command) | Out-Null
#Write-Host $commandResult.Output


##########
## TIDY ##
##########

# Close all connections to the ESXi host
Disconnect-VIServer -Server $ESXiHost -Force -Confirm:$false
Get-SSHSession | Remove-SSHSession | Out-Null
Get-SFTPSession | Remove-SFTPSession | Out-Null

# Return to previous directory
Pop-Location

You can download the latest version of the script from my GitHub repository.

Cover photo by Dynamic Wang on Unsplash

How-to: ODBC connection to DB2 instance (e.g. Mitel CSM)

I’m sure this is a very niche article. Which means if you’ve arrived here, you’ve almost certainly been as frustrated as I have with the documentation for DB2 ODBC connections.

Background: I’m trying to connect to a DB2 instance, running on a Windows machine. I imagine that this procedure will work just as well for instances running on other architectures. And I’m trying to connect from another Windows machine, to pass data into a Microsoft SQL-powered data warehouse.

You will need the “IBM Data Server Driver for ODBC and CLI (Windows/x86-32 32 bit) V10.5 Fix Pack 8”. If the link doesn’t work any more, go to IBM Fix Central and search for “Windows Data Server Driver ODBC 10.5”. Possibly other versions will work, but this is the one I found most reliable.

The process is as follows:

  1. Copy the entire extracted folder to the root of a data drive (e.g. to D:\DB2, E:\DB2 as the case may be).
  2. Add the bin folder to the computer’s PATH environment variable (DB2\clidriver\bin).
  3. Launch an elevated command prompt.
  4. Navigate to the DB2 bin folder. E.g.:
    e:
    cd e:\DB2\clidriver\bin
  5. Install the ODBC driver:
    db2oreg1.exe -i
  6. On Windows Server 2012 R2, also run:
    db2oreg1 -setup
  7. The driver will now appear in the 32-bit ODBC driver list.

To connect:

  1. Launch the 32-bit ODBC data source administration applet.
  2. On the User DSN or System DSN tab, click Add.
  3. Select the ODBC driver and click Finish.
  4. Name the data source (e.g. “CSM”, in my case) and then click “Add”, next to the Database alias dropdown.
  5. Enter User ID and password.
  6. Check the “Save password” box. Note the warning and click OK.
  7. Switch to the Advanced Settings tab.
  8. Use the “Add” button to enter the following values:
    Hostname: [host DNS name or IP address]
    Port:     50000
    Protocol: TCP/IP
    Database: [DB name, e.g. CTI_DATA]
  9. Review the settings and click OK.
  10. To test the connection, first click the “Configure” button.
  11. The credentials are stored in the ini file, so you do not need to enter them here. Simply click “Connect”.
  12. You should see a success message.
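For reference, the applet stores these settings in the driver’s db2cli.ini file (under clidriver\cfg or clidriver\bin, depending on version). The result looks roughly like this – the alias and values below simply mirror the examples above:

[CSM]
Database=CTI_DATA
Protocol=TCPIP
Hostname=my-db2-host.domain.local
Port=50000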

If you’re looking for a free, Windows-based ODBC interrogation program, there are a few out there. All the ones I tried had quirks in their interfaces. I’ve had most success with ODBC query tool though, running under Windows 10.

If that doesn’t work for you, you can try the almost identically named ODBC QueryTool.

CodeIgniter 3: connecting to MS SQL from Linux

Connecting to Windows/Microsoft SQL from Linux/CodeIgniter remains challenging. As PHP progresses, various old methods of connecting to MS SQL are being deprecated in favour of (e.g.) PDO. Unfortunately, reliable MS SQL server PDO drivers are hard to come by under Linux.

As I’ve written previously, the most successful method I’ve found of connecting from CodeIgniter to MS SQL is using a combination of unixODBC and FreeTDS. So here’s an updated guide for CodeIgniter 3 on Ubuntu 14/PHP 5 or Ubuntu 16/PHP 7.

On the server where your web application runs, install unixODBC, FreeTDS and the PHP ODBC extension. For Ubuntu 14:

apt-get install unixodbc freetds freetds-dev tdsodbc php5-odbc

For Ubuntu 16:

apt-get install unixodbc freetds-common freetds-dev tdsodbc php7.0-odbc

Restart Apache:

service apache2 restart

Add the details of your MS SQL server to the FreeTDS config file (at /etc/freetds/freetds.conf), e.g.:

[my-server]
host = my-server.domain.local
port = 1433
tds version = 7.4

Note that the TDS version shown above is for SQL Server 2012 (version 11). For more information about the TDS protocol version numbers (which don’t follow the Microsoft SQL version numbers), read the official documentation.
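Before involving PHP at all, it’s worth testing the FreeTDS layer on its own. The tsql utility ships in the freetds-bin package (install it if necessary):

apt-get install freetds-bin
tsql -S my-server -U myusername -P mypassword

If you get a 1> prompt, FreeTDS can reach your SQL Server; type exit to quit.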

Add to /etc/odbcinst.ini (you may need to check the precise location of these .so files):

[TDS]
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Description = FreeTDS driver
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so

Add details of your MS SQL server to /etc/odbc.ini:

[my-server]
Driver = TDS
Description = My Server
ServerName = my-server
Database = MyDatabase

The ServerName here corresponds to the name of your server in the FreeTDS configuration file. In the CodeIgniter database configuration file, add something like this:

$db['mssql'] = array(
    'dsn'          => '',
    'hostname'     => 'dsn=my-server;uid=myusername;pwd=mypassword',
    'username'     => '',
    'password'     => '',
    'database'     => 'MyDatabase',
    'dbdriver'     => 'odbc',
    'dbprefix'     => '',
    'pconnect'     => FALSE,
    'db_debug'     => (ENVIRONMENT !== 'production'),
    'cache_on'     => FALSE,
    'cachedir'     => '',
    'char_set'     => 'utf8',
    'dbcollat'     => 'utf8_general_ci',
    'swap_pre'     => '',
    'encrypt'      => FALSE,
    'compress'     => FALSE,
    'stricton'     => FALSE,
    'failover'     => array(),
    'save_queries' => TRUE
);

Then your models should begin something like this:

class WidgetModel extends CI_Model
{
    public function __construct()
    {
        parent::__construct();
        // Load MS SQL connection
        $this->widgetdb = $this->load->database('mssql', true);
    }

You can get some strange results using this driver. Mainly, you’ll have to resort to explicit SQL queries. And certain things won’t work as expected – e.g. using “AS” to rename columns only works on calculated columns.
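For example, a model method running an explicit query might look like this (table and column names are purely illustrative):

public function get_widgets()
{
    // Explicit T-SQL query against the ODBC connection loaded in the constructor
    $query = $this->widgetdb->query('SELECT TOP 10 id, widget_name FROM dbo.widgets');
    return $query->result();
}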

How-to: Laravel 5.1 tutorial; part 1 – installation

It’s been quite a while since we’ve posted anything about Laravel. We’re strictly hobbyist developers here and in web development it’s almost impossible to keep up with the rate of change unless you’re a full time developer (and even then, it’s not easy). This pace of change of course means trouble not only for small-time developers like us, but also for enterprise users who favour stability over bleeding-edge features.

So the recent announcement is timely, that Laravel 5.1 is the first version to offer long term support (LTS). LTS in this case means two years of bug fixes and three years of security updates (as opposed to six months and one year respectively for other releases). And for us, this means that although our version 4 tutorials quickly became obsolete, our version 5 tutorials should have a chance of remaining relevant for the next three years. So we hope this new series will be useful for you, our readers.

Without further ado, let’s dive in.

Prerequisites

These days there’s a phenomenal number of ways to get up and running with a server – Vagrant, Puppet, Chef, Ansible and so on. For the purposes of this tutorial I’m going to assume the most basic requirements:

  • Apache web server (other web servers will work, but we won’t explicitly deal with them)
  • Shell access to the server (preferably SSH)
  • Root access to install Composer globally (not essential)
  • Git must be installed in your environment.
  • PHP >= 5.5.9
  • OpenSSL PHP Extension (probably compiled in to your PHP installation – check with phpinfo();)
  • Mbstring PHP Extension
  • Tokenizer PHP Extension

Install Composer

Composer is an integral part of Laravel these days. It’s used for managing dependencies – external libraries and the like, used by projects. It is also used to install Laravel. While logged in as root, to make Composer available globally, do:


curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin
ln -s /usr/local/bin/composer.phar /usr/local/bin/composer

The official Composer documentation suggests using mv composer.phar composer, but if you use a symbolic link instead, upgrading Composer is as simple as running curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin again.

Install Laravel

There are different ways of approaching this, but the approach I prefer (for its simplicity) is as follows. To install Laravel in the directory that will house your web project (e.g. if that’s under /var/www), enter:

composer create-project laravel/laravel /var/www/new.website.name

There will be a lot of activity in the console as all Laravel’s various components are installed. The new website directory contains a folder “public” and it’s to this you need to direct your web server. So for example, with Apache, create a new configuration file /etc/apache2/sites-available/new.website.name.conf:

<VirtualHost *:80>
    ServerName new.website.name
    DocumentRoot "/var/www/new.website.name/public"
    <Directory "/var/www/new.website.name/public">
        allow from all
        Options +Indexes
    </Directory>
</VirtualHost>

(Note that allow from all is Apache 2.2 syntax; on Apache 2.4 and later, use Require all granted instead.) Again, for Apache, enable the new website (e.g.):

a2ensite new.website.name
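At this point, a quick sanity check from the shell doesn’t hurt – if Composer put everything in place, artisan will report the framework version:

cd /var/www/new.website.name
php artisan --version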

If you’re using a control panel (cPanel, Plesk, Virtualmin, etc.), your steps will vary. When you then browse to your new site, you should see the standard Laravel welcome page.

Configuration

There’s lots you can configure, but here are some basics.

  • Make sure the storage and the bootstrap/cache directories are writeable by the web server. E.g.:
    chown -R www-data:www-data /var/www/new.website.name/storage
    chown -R www-data:www-data /var/www/new.website.name/bootstrap/cache
    find /var/www/new.website.name/storage -type f -exec chmod ug+rw {} \;
    find /var/www/new.website.name/storage -type d -exec chmod ug+rwx {} \;
    find /var/www/new.website.name/bootstrap/cache -type f -exec chmod ug+rw {} \;
    find /var/www/new.website.name/bootstrap/cache -type d -exec chmod ug+rwx {} \;
  • In config/app.php set your time zone (e.g.):
    'timezone' => 'Europe/London',
  • And locale (e.g.):
    'locale' => 'en_GB',

Pretty straightforward stuff really.

HP WSD printer port type screws up Windows Server 2012 domain controllers!

I don’t think this can be that uncommon a scenario: a Windows Server 2008 R2 domain, with mainly HP printers. New domain controller added (at new site), this time running Windows Server 2012 R2; HP printers there too.

This was the position I found myself in earlier this year. On paper, there’s nothing unusual about this set-up. Adding new 2012 DCs and standard HP workgroup printers shouldn’t be a problem. That’s what we all thought.

Until the domain controller started becoming non-responsive.

Cue many, many hours on TechNet and various other similar sites, chasing down what I became increasingly sure must be some latent fundamental corruption in Active Directory (horrors!), revealed only by the introduction of the newer o/s. There were many intermediate hypotheses. At one point, we thought maybe it was because we were running a single DC (and it was lonely). Or that the DC was not powerful enough for its file serving and DFS replication duties. So I provisioned a second DC. Ultimately I failed all services over to that because the first DC was needing increasingly frequent reboots.

And then the second domain controller developed the same symptom.

Apart from the intermittent loss of replication and certain other domain duties, the most obvious symptom was that the domain controller could no longer initiate DNS queries from a command prompt. Regardless of which DNS server you queried. Observe:

C:\Users\rob>nslookup bbc.com
Server: UnKnown
Address: 192.168.1.1

*** UnKnown can't find bbc.com: No response from server

C:\Users\rob>nslookup bbc.com 192.168.1.2
Server: UnKnown
Address: 192.168.1.2

*** UnKnown can't find bbc.com: No response from server

C:\Users\rob>nslookup bbc.com 8.8.8.8
Server: UnKnown
Address: 8.8.8.8

*** UnKnown can't find bbc.com: No response from server

Bonkers, right? Half the time, restarting AD services (which in turn restarts file replication, Kerberos KDC, intersite messaging and DNS) brought things back to life. Half the time it didn’t, and a reboot was needed. Even more bonkers, querying the DNS server on the failing domain controller worked, from any other machine. DNS server was working, but the resolver wasn’t (so it seemed).

I couldn’t figure it out. Fed up, I turned to a different gremlin – something I’d coincidentally noticed in the System event log a couple of weeks back.

Ephemeral port exhaustion

Event ID 4266, with the ominous message “A request to allocate an ephemeral port number from the global UDP port space has failed due to all such ports being in use.”

What the blazes is an ephemeral port? I’m just a lowly Enterprise Architect. Don’t come at me with your networking mumbo jumbo.

Oh wait, hang on a minute. Out of UDP ports? DNS, that’s UDP, right?

With the penny slowly dropping, I turned back to the command line. netstat -anob (run from an elevated prompt) lists all current TCP/IP connections, including the name of the executable (if any) associated with each connection. When I dumped this to a file, I quickly noticed literally hundreds of lines like this:

TCP 0.0.0.0:64838 0.0.0.0:0 LISTENING 4244
[spoolsv.exe]

As it happened, this bit of research coincided with the domain controller being in its crippled state. So I restarted the Print Spooler service, experimentally. Lo and behold, the problem goes away. Now we’re getting somewhere.

Clearly something in the printer subsystem is grabbing lots of ports. Another bell rang – I recalled when installing printers on these new domain controllers that instead of TCP/IP ports, I ended up with WSD ports.

WSD ports

What on earth is a WSD port?! (Etc.)

So these WSD ports are a bit like the Bonjour service, enabling computers to discover services advertised on the network. Not at all relevant to a typical Active Directory managed workspace, where printers are deployed through Group Policy. WSD ports (technically monitors, not ports) are however the default for many new printer installations, in Windows 8 and Server 2012. And as far as I can tell, they have no place in an enterprise environment.

Anyway, to cut a long story short (no, I didn’t did I, this is still a long story, sorry!), I changed all the WSD ports to TCP/IP ports. The problem has gone away. Just like that.
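If you’re facing the same cleanup, Windows Server 2012 R2 ships the PrintManagement PowerShell cmdlets, which make short work of it. A sketch – the printer name and IP address here are examples only:

# Find printers still using WSD port monitors
Get-Printer | Where-Object { $_.PortName -like "WSD*" }

# Create a standard TCP/IP port and point the printer at it
Add-PrinterPort -Name "IP_192.168.1.50" -PrinterHostAddress "192.168.1.50"
Set-Printer -Name "HP LaserJet 400 MFP" -PortName "IP_192.168.1.50"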

I spent countless hours trying to fix these domain controllers. I’m now off to brick the windows* at Microsoft and HP corporate headquarters.

Hope this saves someone somewhere the same pain I experienced.

Peace out.

*Joke

How-to: Ultra-secure VNC to computer on home network via Synology NAS using SSH tunnel

Introduction


As you may know from other articles here, I have a Synology DS214Play NAS, and I’m a big fan. It can do so much for so little money. Well, today I’m going to show you a new trick – it will work for most Synology models.

There are a few different ways of remotely connecting to and controlling computers on your home network. LogMeIn used to be a big favourite, until they discontinued the free version. TeamViewer is really the next best thing, but I find it pretty slow and erratic in operation. It’s also not free for commercial use, whereas the system I describe here is completely free.

Many people extol the virtues of VNC, but it does have a big drawback in terms of security, with various parts of the communication being transmitted unencrypted over the network. That’s obviously a bit of a no-no.

The solution is to set up a secure SSH tunnel first. Don’t worry if you don’t know what that means. Just think about this metaphor: imagine you had your own private tunnel, from your home to your office, with locked gates at either end. There are no other exits from this tunnel. So no one can peek into it to see what traffic (cars) is coming and going. An SSH tunnel is quite like that. You pass your VNC “traffic” (data) through this tunnel and it is then inaccessible to any prying eyes.

Assumptions

This guide assumes the following things:

  1. You have a Synology NAS, with users and file storage already configured.
  2. You have at home a Windows computer that is left switched on and connected to your home network while you’re off-site.
  3. Your home PC has a static IP address (or a DHCP reservation).
  4. You have some means of knowing your home’s IP address. In my case, my ISP has given me a static IP address, but you can use something like noip.com if you’re on a dynamic address. (Full instructions are available at that link.)
  5. You can redirect ports on your home router and ideally add firewall rules.
  6. You are able to use vi/vim. Sorry, but that knowledge is really beyond the scope of this tutorial, and you do need to use vi to edit config files on your NAS.
  7. You have a public/private key pair. If you’re not sure what that means, read this.

Install VNC

There are a few different implementations of VNC. I prefer TightVNC for various reasons – in particular it has a built-in file transfer module.

When installing TightVNC on your home PC, make sure you enable at least the “TightVNC Server” option. Check all the boxes in the “Select Additional Tasks” window.

You will be prompted to create “Remote Access” and “Administrative” passwords. You should definitely set the remote access password, otherwise anyone with access to your home network (e.g. someone who might have cracked your wireless password) could easily gain access to your PC.

At work, you’ll just need to install the viewer component.

Configure Synology SSH server

Within Synology DiskStation Manager, enable the SSH server. In DSM 5, this option is found at Control Panel > System > Terminal & SNMP > Terminal.

I urge you not to enable Telnet, unless you really understand the risks.

Next, log in to your NAS as root, using SSH. Normally I would use PuTTY for this purpose.

You’ll be creating an SSH tunnel using your normal day-to-day Synology user. (You don’t normally connect using admin do you? Do you?!) Use vi to edit /etc/passwd. Find the line with your user name and change /sbin/nologin to /bin/sh. E.g.:

rob:x:1026:100:Rob:/var/services/homes/rob:/bin/sh

Save the changes.

As part of this process, we are going to make it impossible for root to log in directly. This is a security advantage. Instead, if you need root permissions, you’ll log in as an ordinary user and use “su” to escalate privileges. su doesn’t work by default: you need to add setuid to the busybox binary. If you don’t know what that means, don’t worry. If you do know what this means and it causes you concern, let me just say that according to my tests, busybox has been built in a way that does not allow users to bypass security via the setuid flag. So, run this command:

chmod 4755 /bin/busybox

Please don’t proceed until you’ve done this bit, otherwise you can lock root out of your NAS.

Next, we need to edit the configuration of SSH. We have to uncomment some lines (which normally begin with #) and change the default values. So use vi to edit /etc/ssh/sshd_config. The values you need to change should end up looking like this, with no # at the start of the lines:

AllowTcpForwarding yes
LoginGraceTime 5m
MaxAuthTries 6
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

In brief these changes do the following, line by line:

  1. Allow using SSH to go from the SSH server (the NAS box) to another machine (e.g. your home PC)
  2. If you foul up your login password loads of times, restrict further attempts for 5 minutes.
  3. Give you 6 attempts before forcing you to wait to retry your logon.
  4. Allow authentication using a public/private key pair (which can enable password-less logons).
  5. Point the SSH daemon to the location of the list of authorized keys – this is relative to an individual user’s home directory.

Having saved these changes, you can force SSH to load the new configuration by uttering the following somewhat convoluted and slightly OCD incantation (OCD, because I hate leaving empty nohup.out files all over the place). We use nohup, because nine times out of ten this stops your SSH connection from dropping while the SSH daemon reloads:

nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out &

SSH keys

You need to have a public/private SSH key pair. I’ve written about creating these elsewhere. Make sure you keep your private key safely protected. This is particularly important if, like me, you use an empty key phrase, enabling you to log on without a password.

In your home directory on the Synology server, create (if it doesn’t already exist) a directory, “.ssh”. You may need to enable the display of hidden files/folders, if you’re doing this from Windows.

Within the .ssh directory, create a new file “authorized_keys”, and paste in your public key. The file will then normally have contents similar to this:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArLX5FlwhHJankhIoZcsIEgmHkOtsSR6eJINGgb4N3/4XQAHpmYPhlAy6Hg2hH8VqNLXgkVia+yMDaDOFQKXX6Ue+hOQt7Q5zB3W1NgVCsyIn9JBu3u6R8rDPBma248DhQ3yfac1iEZWa+3BrHaIM2dLVGu99C5z3Kh1NhDB83xetq08bHayzv39wuwZUZOohDzsCK29ZaEYU9ZctN/RZR4rW7A7odJbbgqG82IZXhUhiam2utpjszLJ+sMOw6z7tcpgnF5CLDys2xvE6ekLjEPA2b9KkrU6e+ILXM85s7/HP9aTkTwFyyBcPAvmO7i0xYyotu58DKf++nM2ZtpNBPQ== Rob's SSH key

This is all on a single line. For RSA keys, the line must begin ssh-rsa.

SSH is very fussy about file permissions. Log in to the NAS as root and then su to your normal user (e.g. su rob). Make sure permissions for these files are set correctly with the following commands:

chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

If you encounter any errors at this point, make sure you fix them before proceeding. Now test your SSH login. If it works and you can also su to root, you can now safely set the final two settings in sshd_config:

PermitRootLogin no
PasswordAuthentication no

The effect of these:

  • Disallow direct logging in by root.
  • Disallow ordinary password-based logging in.

Reload SSH with nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out & as before.
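After the reload, double-check that you can still get in with your key, since password logins will now be refused. From a Windows machine on your LAN, you can test with plink – the IP address below is a placeholder for your NAS’s LAN address, and the key path is wherever you saved your PuTTY private key:

plink -ssh -i "C:\Users\rob\.ssh\id_rsa" rob@192.168.1.10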

Setting up your router

There are so many routers out there that I can’t really help you out with this one. You will need to port forward a port number of your choosing to port 22 on your Synology NAS. If you’re not sure where to start, Port Forward is probably the most helpful resource on the Internet.

I used a high-numbered port on the outer edge of my router. I.e. I forwarded port 53268 to port 22 (SSH). This is only very mild protection, but it does reduce the number of script kiddie attacks. To put that in context, while I was testing this process I just forwarded the normal port 22 to port 22. Within 2 minutes, my NAS was emailing me about failed login attempts. Using a high-numbered port makes most of this noise go away.

To go one better however, I also used my router’s firewall to prevent unknown IP addresses from connecting to SSH. Since I’m only ever doing this from work, I can safely limit this to the IP range of my work’s leased line. This means it’s highly unlikely anyone will ever be able to brute force their way into my SSH connection, if I ever carelessly leave password-based logins enabled.

Create a PuTTY configuration

I recommend creating a PuTTY configuration using PuTTY’s interface. This is the easiest way of setting all the options that will later be used by plink in my batch script. plink is the stripped-down command-line version of PuTTY.

Within this configuration, you need to set:

  1. Connection type: SSH
  2. Hostname and port: your home external IP address (or DNS name) and the port you’ve forwarded through your router, preferably a non-standard port number.
  3. Connection > Data > Auto-login username: Put your Synology user name here.
  4. Connection > SSH > Auth > Private key file for identification: here put the path to the location of your private key on your work machine, from where you’ll be initiating the connection back to home.
  5. Connection > SSH > Tunnels: This bears some explanation. When you run VNC viewer on your work machine, you’ll connect to a port on your work machine. PuTTY forwards this through the SSH tunnel. So here you need to choose a random “source port” (not the normal VNC port, if you’re also running VNC server on your work machine). This is the port that’s opened on your work machine. Then in the destination, put the LAN address of your home PC and add the normal VNC port. Make sure you click Add.
  6. Finally, go back to Session, type a name in the “Saved Session” field and click “Save”. You will then be able to use this configuration with plink for a fully-automatic login and creation of the SSH tunnel.

Now would be a good time to fire up this connection and check that you can login okay, without any password prompts or errors.

Using username "rob".
Authenticating with public key "Rob's SSH key"

BusyBox v1.16.1 (2014-05-29 11:29:12 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

RobNAS1>

Making VNC connection

I would suggest keeping your PuTTY session open while you’re setting up and testing your VNC connection through the tunnel. This is really the easy bit, but there are a couple of heffalump pits, which I’ll now warn you about. So, assuming your VNC server is running on your home PC and your SSH tunnel is up, let’s now configure the VNC viewer at the work end. Those heffalump pits:

  1. When you’re entering the “Remote Host”, you need to specify “localhost” or “127.0.0.1”. You’re connecting to the port on your work machine. Don’t enter your work machine’s LAN ip address – PuTTY is not listening on the LAN interface, just on the local loopback interface.
  2. You need to specify the random port you chose when configuring tunnel forwarding (5990 in my case) and you need to separate that from “localhost” with double colons. A single colon is used for a different purpose, so don’t get tripped up by this subtle semantic difference.

If you have a working VNC session at this point, congratulations! That’s the hard work out of the way.

It would be nice to automate the whole connection process. While you have your VNC session established, it is worth saving a VNC configuration file, so you can use this in a batch script. Click the VNC logo in the top left of the VNC session, then “Save session to a .vnc file”. You have the option to store the VNC password in this file, which I’ve chosen to do.

Before saving the session, you might want to tweak some optimization settings. This will really vary depending on your preferences and the speed of your connection. On this subject, this page is worth a read. I found I had the best results when using Tight encoding, with best compression (9), JPEG quality 4 and Allow CopyRect.

One batch script to rule them all

To automate the entire process, bringing up the tunnel and connecting via VNC, you might like to amend the following batch script to fit your own environment:

@echo off
REM Bring up the SSH tunnel in the background, using the saved PuTTY session
start /min "SSH tunnel home" "C:\Program Files (x86)\PuTTY\plink.exe" -N -batch -load "Home with VNC tunnel"
REM Use ping to pause for 2 seconds while the connection establishes
ping -n 2 localhost > NUL
REM Launch the saved VNC session (opens in TightVNC Viewer via file association)
"C:\Batch scripts\HomePC.vnc"

I suggest creating a shortcut to this batch file in your Start menu and setting its properties such that it starts minimised. While your SSH tunnel is up, you will have a PuTTY icon on your task bar. Don’t forget to close this after you close VNC, to terminate the tunnel link. An alternative approach is to use the free tool MyEnTunnel to ensure your SSH tunnel is always running in the background if that’s what you want. I’ll leave that up to you.

DSM Upgrades

After a DSM upgrade, you may find that your SSH config resets and you can no longer use VNC remotely. In that eventuality, log into your NAS via your LAN (as admin) and change the config back as above. You’ll also probably need to chmod busybox again.

root locked out of SSH?

For the first time in my experience, during May 2015, a DSM upgrade reset the suid bit on Busybox (meaning no more su), but didn’t reset the PermitRootLogin setting. That meant that root could not log in via SSH. Nor could you change to root (using su). If you find yourself in this position, follow these remedial steps:

  1. Go to Control Panel > Terminal & SNMP
  2. Check the “Enable Telnet service” box.
  3. Click “Apply”.
  4. Log in as root, using Telnet. You can either use PuTTY (selecting Telnet/port 23 instead of SSH/port 22) or a built-in Telnet client.
  5. At the prompt, enter chmod 4755 /bin/busybox.
  6. Go to Control Panel > Terminal & SNMP
  7. Uncheck the “Enable Telnet service” box.
  8. Click “Apply”.

Do make sure you complete the whole process; leaving Telnet enabled is a security risk, partly because passwords are sent in plain text, which is a Very Bad Thing.

Conclusion

So, what do you think? Better than LogMeIn/TeamViewer? Personally I prefer it, because I’m no longer tied to a third party service. There are obvious drawbacks (it’s harder to set up, for a start, and if you firewall your incoming SSH connection, you can’t use this from absolutely anywhere on the Internet), but I like it for its benefits, including what I consider to be superior security.

Anyway, I hope you find this useful. Until next time.

Subway Tunnel image copyright © Jösé, licensed under Creative Commons. Used with permission.

How-to: Laravel 4 tutorial; part 6 – virtualised development environment – Laravel Homestead


Introduction

It has been a while since I have had time to work on Laravel development or indeed write a tutorial. And since then, I have decommissioned my main web development server in favour of a Synology NAS. Dummy and I use a third party hosting service for hosting our clients’ web sites and our own. This shared hosting service comes with limitations that make it impossible to install Laravel through conventional means.

So instead, I’m setting up a virtual development environment, that will run on the same laptop that I use for code development. Once development is complete, I can upload the whole thing to the shared hosting service. Getting this set up is surprisingly complicated, but once you’ve worked through all these steps, you’ll have a flexible and easy-to-use environment for developing and testing your Laravel websites. This article assumes we’re running Windows.

Components

  • VirtualBox enables you to run other operating systems on existing hardware, without wiping anything out. Your computer can run Windows and then through VirtualBox, it can run one or more other systems at the same time. A computer within a computer. Or in this case, a server within a computer. Download VirtualBox here and install it.
  • Vagrant enables the automation of much of the process of creating these “virtual computers” (usually called virtual machines). Download Vagrant here and install it.
  • Git for Windows. If you’re a developer, chances are you know a bit about Git, so I’ll not go into detail here. Suffice it to say that you’ll need Git for Windows for this project: here.
  • PuTTY comes with an SSH key pair generator, which you’ll need if you don’t already have a public/private key pair. Grab the installer here.
  • PHP for Windows. This is not used for powering your websites, but is used by Composer (next step). I suggest downloading the “VC11 x86 Thread Safe” zip file from here. Extract all files to C:\php. There’s no setup file for this. Rename the file php.ini-development to php.ini and remove the semicolon from the line ;extension=php_openssl.dll. Find a line containing “;extension_dir” and change it to extension_dir = "ext"
  • Composer for Windows. Composer is a kind of software component manager for PHP and we use it to install and set up Laravel. Download the Windows installer here and install.

SSH key pair

You’ll need an SSH key pair later in this tutorial. If you don’t already have this, generate as follows:

  1. Start PuTTY Key Generator. (It may be called “PuTTYgen” in your Start Menu.)
  2. I would suggest accepting the defaults of a 2048-bit SSH-2 RSA key pair. Click “Generate” and move your mouse around as directed by the program.
  3. You can optionally give the key a passphrase. If you leave the passphrase blank, you can ultimately use your key for password-less logins. The drawback is that if your private key ever falls into the wrong hands, an intruder can use the key in the same way. Make your decision, then save the public and private keys. You should always protect your private key. If you left the passphrase blank, treat it like a plain text password. I tend to save my key pairs in a directory .ssh, under my user folder.
  4. Use the “Save private key” button to save your private key (I call it id_rsa).
  5. Don’t use the “Save public key” button – that produces a key that won’t work well in a Linux/Unix environment (which your virtual development box will be). Instead, copy the text from the “Key” box, under where it says “Public key for pasting into OpenSSH authorized_keys file:”. Save this into a new text file. I call my public key file “id_rsa.pub”.

Install the Laravel Installer (sounds clumsy, huh?!)

  1. Load Git Bash.
  2. Download the Laravel installer with this command: composer global require "laravel/installer=~1.1". This will take a few minutes, depending on the speed of your connection.
  3. Ideally, you want the Laravel executable in your system path. On Windows 7/8, from Windows/File Explorer, right-click “My Computer”/”This PC”, then click Properties. Click Advanced System Settings. Click the Environment Variables button. Click Path in the System variables section (lower half of the dialogue) then click Edit. At the very end of the Variable value field, add “;%APPDATA%\Composer\vendor\bin”. Click OK as needed to save changes. Git Bash won’t have access to that new PATH variable until you’ve exited and restarted.

Create Laravel Project

All your Laravel projects will be contained and edited within your Windows file system. I use NetBeans for development and tend to keep my development sites under (e.g.): C:\Users\Geek\Documents\NetBeansProjects\Project World Domination. Create this project as follows:

  1. Fire up Git Bash. This makes sure everything happens in the right place. The remaining commands shown below are from this shell.
  2. Change to the directory above wherever you want the new project to be created.
    cd ~/NetBeansProjects
  3. Install Laravel:
    laravel new "Project World Domination"
    Note: if the directory “Project World Domination” already exists, this command will fail with an obscure error.
  4. That’s it for this stage. Were you expecting something more complicated?

Laravel Homestead

Homestead is a pre-built development environment, consisting of Ubuntu, a web server, PHP, MySQL and a few other bits and bobs. It’s a place to host your Laravel websites while you’re testing them locally. Follow these steps to get it up and running:

  1. From a Git Bash prompt, change to your user folder. Make sure this location has sufficient space for storing virtual machines (800MB+ free).
    cd ~
  2. Make the Homestead Vagrant “box” available to your system.
    vagrant box add laravel/homestead
    This downloads 800MB or so and may take a while.
  3. Clone the Homestead repository into your user folder.
    git clone https://github.com/laravel/homestead.git Homestead
    This should be pretty quick and results in a new Homestead folder containing various scripts and configuration items for the Homestead virtual machine.
  4. Edit the Homestead.yaml file inside the Homestead directory. In the section “authorize”, enter the path to your public SSH key (see above). Similarly, enter the path to your private key in the “keys” section.
  5. Vagrant can easily synchronise files between your PC and the virtual machine. Any changes in one place instantly appear in the other. So you could for example in the “folders” section, map C:\Users\Fred\Code (on your Windows machine) to /home/vagrant/code (on the virtual machine). In my case, I’ve got this:
    folders:
        - map: ~/Documents/NetBeansProjects
          to: /home/vagrant/Projects
  6. We’re going to create a fake domain name for each project. Do something like this in the Homestead.yaml file:
    sites:
        - map: pwd.localdev
          to: /home/vagrant/Projects/Project World Domination/public

    Of course, if you put “http://pwd.localdev” in your browser, it will have no idea where to go. See the next section “Acrylic DNS proxy” for the clever trick that will make this all possible.
  7. To fire up the Homestead virtual environment, issue the command vagrant up from the Homestead directory. This can take a while and may provoke a few security popups.

Here’s the complete Homestead.yaml file:

---
ip: "192.168.10.10"
memory: 2048
cpus: 1

authorize: ~/.ssh/id_rsa.pub

keys:
    - ~/.ssh/id_rsa

folders:
    - map: ~/Documents/NetBeansProjects
      to: /home/vagrant/Projects

sites:
    - map: pwd.localdev
      to: /home/vagrant/Projects/Project World Domination/public

variables:
    - key: APP_ENV
      value: local

At this point, you should be able to point your browser to http://127.0.0.1:8000. If you have created a Laravel project as above, and everything has gone well, you’ll see the standard Laravel “you have arrived” message. The Homestead virtual machine runs the Nginx webserver and that webserver will by default give you the first-mentioned web site if you connect by IP address.

Laravel landing page

VirtualBox is configured to forward port 8000 on your computer to port 80 (the normal port for web browsing) on the virtual machine. In most cases, you can also connect directly to your virtual machine instead of via port forwarding. You’ll see in the Homestead.yaml file that the virtual machine’s IP address is set to 192.168.10.10. So generally (if there are no firewall rules in the way), you can browse to http://127.0.0.1:8000 or to http://192.168.10.10 (port 80 is assumed if omitted). Both should work. Personally I prefer the latter.
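
If you want to confirm both routes reach the VM, a quick check with curl (bundled with Git Bash) does the job – addresses as configured above:

    # via VirtualBox port forwarding
    curl -I http://127.0.0.1:8000
    # directly to the VM's private address
    curl -I http://192.168.10.10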

Acrylic DNS proxy

Of course we want to be able to host multiple development websites on this virtual machine. To do this, you need to be able to connect to the web server by DNS name (www.geekanddummy.com), not just by IP address. Many tutorials on Homestead suggest editing the native Windows hosts file, but to be honest that can be a bit of a pain. Why? Because you can’t use wildcards. So your hosts file ends up looking something like this:


127.0.0.1 pwd.local
127.0.0.1 some.other.site
127.0.0.1 override.com
127.0.0.1 webapp1.local
127.0.0.1 webapp2.local

(If you’re connecting directly to the VM, replace 127.0.0.1 with 192.168.10.10 in the above example.) So this can go on and on, if you’re developing a load of different sites/apps on the same Vagrant box. Wouldn’t it be nice if you could just put a single line, 127.0.0.1 *.local? This simply doesn’t work in a Windows hosts file.

And this is where the Acrylic DNS proxy server comes in. It has many other great features, but the one I’m particularly interested in is the ability to deal with wildcard entries. All DNS requests go through Acrylic and any it can’t respond to, it sends out to whichever other DNS servers you configure. So it sits transparently between your computer and whatever other DNS servers you normally use.

The Acrylic website has instructions for Windows OSes – you have to configure your network to use Acrylic instead of any other DNS server. Having followed those instructions, what we’re now most interested in is the Acrylic hosts file. You should have an entry in your Start menu saying “Edit Acrylic hosts file”. Click that link to open the file.

Into that file, I add a couple of lines (for both scenarios, port forwarding and direct connection, so that both work):

127.0.0.1 *.localdev
192.168.10.10 *.localdev

I prefer using *.localdev rather than *.local, because .local is reserved for multicast DNS (Bonjour/zeroconf) and can behave unpredictably when used as an ordinary DNS suffix.
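
You can verify the wildcard is being answered by querying Acrylic directly – a quick sketch, on the assumption that Acrylic is listening on 127.0.0.1:

    # any name under .localdev should now resolve
    nslookup pwd.localdev 127.0.0.1
    nslookup made-up-on-the-spot.localdev 127.0.0.1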

This means that I can now put the following in my Homestead.yaml file:

sites:
    - map: site1.localdev
      to: /home/vagrant/Projects/site1/public
    - map: site2.localdev
      to: /home/vagrant/Projects/site2/public
    - map: site3.localdev
      to: /home/vagrant/Projects/site3/public
    - map: site4.localdev
      to: /home/vagrant/Projects/site4/public
    - map: site5.localdev
      to: /home/vagrant/Projects/site5/public
    - map: site6.localdev
      to: /home/vagrant/Projects/site6/public
    - map: site7.localdev
      to: /home/vagrant/Projects/site7/public

and they will all work. No need to add a corresponding hosts file entry for each web site. Just create your Laravel project at each of those directories.
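
If you want to stub out all seven projects in one go, you can loop over them inside the VM – a sketch assuming Composer is available on the Homestead box (it ships with it), using the hypothetical site names above:

    # on the host, from the Homestead directory
    vagrant ssh

    # then, inside the VM
    cd ~/Projects
    for site in site1 site2 site3 site4 site5 site6 site7; do
        composer create-project laravel/laravel "$site"
    done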

Managing MySQL databases

I would recommend managing your databases by running software on your laptop that communicates with the MySQL server on the virtual machine. Personally I would use MySQL Workbench, but some people find HeidiSQL easier to use. HeidiSQL can manage PostgreSQL and Microsoft SQL databases too. You can connect via a forwarded port. If you wish to connect directly to the virtual machine, you’ll need to reconfigure MySQL in the virtual machine, as follows:

  1. Start the Git Bash prompt
  2. Open a shell on the virtual machine by issuing the command vagrant ssh
  3. Assuming you know how to use vi/vim, type sudo vim /etc/my.cnf (root privileges are needed to edit files under /etc). If you’re not familiar with vim, try nano, which displays help (keystrokes) at the bottom of the terminal: sudo nano /etc/my.cnf
  4. Look for the line bind-address = 10.0.2.15 and change it to bind-address = *
  5. Save my.cnf and exit the editor.
  6. Issue the command sudo service mysql restart
  7. You can now connect to MySQL using the VM’s normal MySQL port. Exit the shell with Ctrl-D or exit.

Okay, okay, why go to all this trouble? I just prefer it. So sue me.

Forwarded port:
  Port: 33060
  Host: 127.0.0.1
  User name: homestead
  Password: secret

Direct to VM:
  Port: 3306
  Host: 192.168.10.10
  User name: homestead
  Password: secret
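
The same details from the command line, as a sketch – this assumes you have a MySQL client installed on the Windows side and uses the homestead/secret credentials above:

    # via the forwarded port
    mysql -h 127.0.0.1 -P 33060 -u homestead -psecret
    # or directly to the VM (after the bind-address change above)
    mysql -h 192.168.10.10 -P 3306 -u homestead -psecret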

Managing your environment

Each time you change the Homestead.yaml file, run the command vagrant provision from the Homestead directory, to push the changes through to the virtual machine. And once you’ve finished your development session, run vagrant suspend, to pause the virtual machine. (vagrant up starts it again.) If you want to tear the thing apart and start again, run the command vagrant destroy followed by vagrant up.
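
As a cheat sheet, all run from the Homestead directory:

    vagrant provision   # push Homestead.yaml changes through to the VM
    vagrant suspend     # pause the VM at the end of a session
    vagrant up          # resume (or boot from scratch)
    vagrant destroy     # tear the whole thing apart, ready for a fresh vagrant up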

How-to: Use custom DocumentRoot when hosting web sites on a Synology NAS

Synology DS214play

I’ve recently taken the plunge and invested in a Synology NAS – the powerful DS214Play. Some of my colleagues have been raving about Synology’s NASes for a while and I thought it was about time I saw what all the fuss was about. This how-to article is not the place for a detailed review so suffice it to say I have been thoroughly impressed – blown away even – by the DS214Play.

The NAS is taking over duties from my aging IBM xSeries tower server. The server is a noisy, power-hungry beast and I think Mrs Geek will be quite happy to see the back of it. Life with a technofreak, eh. One of the duties to be replaced is hosting a few lightly-loaded web apps and development sites.

The NAS has fairly straightforward web hosting capabilities out of the box. Apache is available and it’s a cinch to set up a new site and provision a MySQL database. Problems arise however when you try to do anything out of the ordinary. Synology improves the NAS’s capabilities with every iteration of DSM (DiskStation Manager, the web interface), but at the time of writing, version 5.0-4482 of DSM doesn’t allow much fine tuning of your web site’s configuration.

A particular issue for anyone who works with web development frameworks (Laravel, CodeIgniter, CakePHP and the like) is that it’s really bad practice to place the framework’s code within the web root. I usually adopt the practice of putting the code in sibling directories. So, in the case of CodeIgniter for example, within a higher-level directory, you’ll have the system, application and public_html directories all in one place. Apache is looking to public_html to load the web site and then typically the index.php file will use PHP code inclusion to bootstrap the framework from directories not visible to the web server. Much better for security.

DSM doesn’t currently provide any way of customising the web root. All web sites are placed in a sub-folder under (e.g.) /volume1/web/. That’s it. Forward and back slashes are not permitted in the sub-folder field, so you can’t point a site at anything deeper.

Synology virtual host 01

This is my intended folder structure for an example CodeIgniter application:

volume1
|
\----web
     |
     \----test.domain.com
          |
          \-----application
          |
          \-----public_html
          |
          \-----system
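
If you’d rather work from the shell, the sibling directories can be created in one SSH session – a minimal sketch assuming the example paths above (steps 1 and 2 below achieve the same via the web interface):

    # from an SSH session on the NAS, as root
    mkdir -p /volume1/web/test.domain.com/application
    mkdir -p /volume1/web/test.domain.com/public_html
    mkdir -p /volume1/web/test.domain.com/system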

Here’s how to do it.

  1. First, use the Control Panel to create your new virtual host. I give all my virtual hosts the same sub-folder name as the domain name. Here, let’s go for test.domain.com:
    Synology virtual host 02
  2. Very shortly, this new sub-folder should appear within your web share. You can place your framework in this folder. If you’re not yet ready to put the framework there, at least create the folder structure for the public folder that will equate to your Apache DocumentRoot. In my example, this would involve creating public_html within the test.domain.com directory.
    Synology virtual host 03
  3. Next, log in to your NAS as root, using SSH. We need to edit the httpd-vhost.conf-user file:
    cd /etc/httpd/sites-enabled-user
    vi httpd-vhost.conf-user
  4. The VirtualHost directive for your new web site will look something like this:

    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot "/var/services/web/test.domain.com"
        ErrorDocument 403 "/webdefault/sample.php?status=403"
        ErrorDocument 404 "/webdefault/sample.php?status=404"
        ErrorDocument 500 "/webdefault/sample.php?status=500"
    </VirtualHost>


    Change the DocumentRoot line as required:

    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot "/var/services/web/test.domain.com/public_html"
        ErrorDocument 403 "/webdefault/sample.php?status=403"
        ErrorDocument 404 "/webdefault/sample.php?status=404"
        ErrorDocument 500 "/webdefault/sample.php?status=500"
    </VirtualHost>

    Then save the file.

  5. UPDATE: Thanks to commenter oggan below for this suggestion – instead of the workaround described in the rest of this step, you can just issue the command httpd -k restart at the command line.
    There’s not a lot of information out there about causing Apache to reload its configuration files. I found that calling the RC file (/usr/syno/etc.defaults/rc.d/S97apache-sys.sh reload) didn’t actually result in the new config being used. Whatever the reason for this, you can force the new configuration to load by making some other change in the control panel for web services. For example, enable HTTPS and click Apply (you can disable again afterwards).
    Synology virtual host 04
  6. You will now find that Apache is using the correct web root, so you can continue to develop your web application as required.
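
Pulled together, the whole SSH edit-and-reload cycle is just this – a sketch assuming the httpd -k restart tip above works on your DSM version:

    # as root over SSH
    cd /etc/httpd/sites-enabled-user
    vi httpd-vhost.conf-user    # point DocumentRoot at .../public_html
    httpd -k restart            # make Apache pick up the new config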

NB: There’s a big caveat with this. Every time you make changes using the Virtual Host List in the Synology web interface, it will overwrite any changes you make to the httpd-vhost.conf-user file. Until Synology makes this part of the interface more powerful, you will need to remember to make these changes behind the scenes every time you add or remove a virtual host, for all hosts affected.