I don’t think this can be that uncommon a scenario: a Windows Server 2008 R2 domain, with mainly HP printers. New domain controller added (at new site), this time running Windows Server 2012 R2; HP printers there too.
This was the position I found myself in earlier this year. On paper, there’s nothing unusual about this set-up. Adding new 2012 DCs and standard HP workgroup printers shouldn’t be a problem. That’s what we all thought.
Until the domain controller started becoming non-responsive.
Cue many, many hours on TechNet and various similar sites, chasing down what I became increasingly sure must be some latent fundamental corruption in Active Directory (horrors!), revealed only by the introduction of the newer OS. There were many intermediate hypotheses. At one point, we thought maybe it was because we were running a single DC (and it was lonely). Or that the DC was not powerful enough for its file serving and DFS replication duties. So I provisioned a second DC. Ultimately I failed all services over to it, because the first DC was needing increasingly frequent reboots.
And then the second domain controller developed the same symptom.
Apart from the intermittent loss of replication and certain other domain duties, the most obvious symptom was that the domain controller could no longer initiate DNS queries from a command prompt. Regardless of which DNS server you queried. Observe:
*** UnKnown can't find bbc.com: No response from server
Bonkers, right? Half the time, restarting AD services (which in turn restarts file replication, Kerberos KDC, intersite messaging and DNS) brought things back to life. Half the time it didn’t, and a reboot was needed. Even more bonkers, querying the DNS server on the failing domain controller worked, from any other machine. DNS server was working, but the resolver wasn’t (so it seemed).
I couldn’t figure it out. Fed up, I turned to a different gremlin – something I’d coincidentally noticed in the System event log a couple of weeks back.
Event ID 4266, with the ominous message “A request to allocate an ephemeral port number from the global UDP port space has failed due to all such ports being in use.”
What the blazes is an ephemeral port? I’m just a lowly Enterprise Architect. Don’t come at me with your networking mumbo jumbo.
Oh wait, hang on a minute. Out of UDP ports? DNS, that’s UDP, right?
With the penny slowly dropping, I turned back to the command line. netstat -anob lists all current TCP/IP connections, including the name of the executable (if any) associated with each connection. When I dumped this to a file, I quickly noticed literally hundreds of near-identical lines, each one holding open a UDP port.
As it happened, this bit of research coincided with the domain controller being in its crippled state. So I restarted the Print Spooler service, experimentally. Lo and behold, the problem goes away. Now we’re getting somewhere.
Clearly something in the printer subsystem is grabbing lots of ports. Another bell rang – I recalled when installing printers on these new domain controllers that instead of TCP/IP ports, I ended up with WSD ports.
What on earth is a WSD port?! (Etc.)
So these WSD ports are a bit like the Bonjour service, enabling computers to discover services advertised on the network. Not at all relevant to a typical Active Directory managed workspace, where printers are deployed through Group Policy. WSD ports (technically monitors, not ports) are however the default for many new printer installations, in Windows 8 and Server 2012. And as far as I can tell, they have no place in an enterprise environment.
Anyway, to cut a long story short (no, I didn’t, did I? This is still a long story – sorry!), I changed all the WSD ports to TCP/IP ports. The problem has gone away. Just like that.
I spent countless hours trying to fix these domain controllers. I’m now off to brick the windows* at Microsoft and HP corporate headquarters.
Hope this saves someone somewhere the same pain I experienced.
Have you ever needed to write down a web site address, or worse – type it into a text message? And it’s something like http://www.someboguswebsite.com/this/is/a/painfully/long/url. Tedious, right? Or have you needed to paste an address into a tweet, but you’ve come up against the maximum character limit?
In the case of Twitter, chances are you’ve used Twitter’s URL shortener of choice, Bitly. In this case, the awful, long URL becomes http://bit.ly/1xXCa5h – 21 characters instead of 60. Quite a trick. So you use the shortened URL for convenience, pass it on via social media or SMS and this is magically transformed into the original URL, upon use.
Recently in my very geeky news feed, I came across Polr, a self-hosted URL shortener. What a wheeze! Grab yourself a suitable domain, and you can poke your tongue out at Twitter, Google and the like, with all their evil data-mining ways.
It was surprisingly easy to get up and running with Polr, in our case using a virtual server hosted with Amazon. We bought a nifty little domain, gd1.uk, and off we went! To be honest, the most time-consuming part was tracking down a short domain name – there aren’t many about.
All this is a roundabout way of saying, please feel free to use our brand shiny new URL shortener. Because it’s so young, the URLs generated really are very short. https://geekanddummy.com for example is now http://gd1.uk/1 – just 15 of your precious characters.
Yes, we’ve had to put adverts on it. Server hosting ain’t free. But we won’t charge you for using the service and we have no wicked designs on your data. Promise.
So go to it. Bookmark gd1.uk and enjoy the majesty, the awe of the world’s best* URL shortener. GD1 – it’s a good one.
As you may know from other articles here, I have a Synology DS214Play NAS, and I’m a big fan. It can do so much for so little money. Well, today I’m going to show you a new trick – it will work for most Synology models.
There are a few different ways of remotely connecting to and controlling computers on your home network. LogMeIn used to be a big favourite, until they discontinued the free version. TeamViewer is really the next best thing, but I find it pretty slow and erratic in operation. It’s also not free for commercial use, whereas the system I describe here is completely free.
Many people extol the virtues of VNC, but it does have a big drawback in terms of security, with various parts of the communication being transmitted unencrypted over the network. That’s obviously a bit of a no-no.
The solution is to set up a secure SSH tunnel first. Don’t worry if you don’t know what that means. Just think about this metaphor: imagine you had your own private tunnel, from your home to your office, with locked gates at either end. There are no other exits from this tunnel. So no one can peek into it to see what traffic (cars) is coming and going. An SSH tunnel is quite like that. You pass your VNC “traffic” (data) through this tunnel and it is then inaccessible to any prying eyes.
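If you’re more at home on a command line, the kind of tunnel we’ll be building is roughly equivalent to this OpenSSH one-liner (the address, port and user name here are only placeholders for illustration – we’ll build the real thing through PuTTY’s GUI later):
ssh -N -L 5900:192.168.1.50:5900 rob@your-home-address
That says: connect to the SSH server at home, don’t run any remote commands (-N), and forward local port 5900 through the encrypted connection to port 5900 on the home PC at 192.168.1.50.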
Assumptions
This guide assumes the following things:
You have a Synology NAS, with users and file storage already configured.
You have at home a Windows computer that is left switched on and connected to your home network while you’re off-site.
Your home PC has a static IP address (or a DHCP reservation).
You have some means of knowing your home’s IP address. In my case, my ISP has given me a static IP address, but you can use something like noip.com if you’re on a dynamic address. (Full instructions are available at that link.)
You can redirect ports on your home router and ideally add firewall rules.
You are able to use vi/vim. Sorry, but that knowledge is really beyond the scope of this tutorial, and you do need to use vi to edit config files on your NAS.
You have a public/private key pair. If you’re not sure what that means, read this.
Install VNC
There are a few different implementations of VNC. I prefer TightVNC for various reasons – in particular it has a built-in file transfer module.
When installing TightVNC on your home PC, make sure you enable at least the “TightVNC Server” option.
Check all the boxes in the “Select Additional Tasks” window.
You will be prompted to create “Remote Access” and “Administrative” passwords. You should definitely set the remote access password, otherwise anyone with access to your home network (e.g. someone who might have cracked your wireless password) could easily gain access to your PC.
At work, you’ll just need to install the viewer component.
Configure Synology SSH server
Within Synology DiskStation Manager, enable the SSH server. In DSM 5, this option is found at Control Panel > System > Terminal & SNMP > Terminal.
I urge you not to enable Telnet, unless you really understand the risks.
Next, login to your NAS as root, using SSH. Normally I would use PuTTY for this purpose.
You’ll be creating an SSH tunnel using your normal day-to-day Synology user. (You don’t normally connect using admin do you? Do you?!) Use vi to edit /etc/passwd. Find the line with your user name and change /sbin/nologin to /bin/sh. E.g.:
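Something like this, for a hypothetical user “rob” (your UID, GID and home directory will differ – the important change is the shell at the end of the line):
rob:x:1026:100:Rob:/var/services/homes/rob:/bin/sh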
As part of this process, we are going to make it impossible for root to log in. This is a security advantage. Instead if you need root permissions, you’ll log in as an ordinary user and use “su” to escalate privileges. su doesn’t work by default. You need to add setuid to the busybox binary. If you don’t know what that means, don’t worry. If you do know what this means and it causes you concern, let me just say that according to my tests, busybox has been built in a way that does not allow users to bypass security via the setuid flag. So, run this command:
chmod 4755 /bin/busybox
Please don’t proceed until you’ve done this bit, otherwise you can lock root out of your NAS.
Next, we need to edit the configuration of SSH. We have to uncomment some lines (that normally begin with #) and change the default values. So use vi to edit /etc/ssh/sshd_config. The values you need to change should end up looking like this, with no # at the start of the lines:
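Something along these lines (the exact values are up to you, and the defaults in your file may differ slightly between DSM versions):
AllowTcpForwarding yes
LoginGraceTime 5m
MaxAuthTries 6
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys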
In brief these changes do the following, line by line:
Allow using SSH to go from the SSH server (the NAS box) to another machine (e.g. your home PC)
If you foul up your login password loads of times, restrict further attempts for 5 minutes.
Give you 6 attempts before forcing you to wait to retry your logon.
Allow authentication using a public/private key pair (which can enable password-less logons).
Point the SSH daemon to the location of the list of authorized keys – this is relative to an individual user’s home directory.
Having saved these changes, you can force SSH to load the new configuration by uttering the following somewhat convoluted and slightly OCD incantation (OCD, because I hate leaving empty nohup.out files all over the place). We use nohup, because nine times out of ten this stops your SSH connection from dropping while the SSH daemon reloads:
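nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out &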
You need to have a public/private SSH key pair. I’ve written about creating these elsewhere. Make sure you keep your private key safely protected. This is particularly important if, like me, you use an empty key phrase, enabling you to log on without a password.
In your home directory on the Synology server, create (if it doesn’t already exist) a directory, “.ssh”. You may need to enable the display of hidden files/folders, if you’re doing this from Windows.
Within the .ssh directory, create a new file “authorized_keys”, and paste in your public key. The file will then normally have contents similar to this:
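For example (the key itself is truncated here for display – yours will be several hundred characters of base64):
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC...rest-of-key... rob@work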
This is all on a single line. For RSA keys, the line must begin ssh-rsa.
SSH is very fussy about file permissions. Log in to the NAS as root and then su to your normal user (e.g. su rob). Make sure permissions for these files are set correctly with the following commands:
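Something like the following should do it, run from your home directory on the NAS:
chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys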
If you encounter any errors at this point, make sure you fix them before proceeding. Now test your SSH login. If it works and you can also su to root, you can now safely set the final two settings in sshd_config:
PermitRootLogin no
PasswordAuthentication no
The effect of these:
Disallow direct logging in by root.
Disallow ordinary password-based logging in.
Reload SSH with nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out & as before.
Setting up your router
There are so many routers out there that I can’t really help you out with this one. You will need to port forward a port number of your choosing to port 22 on your Synology NAS. If you’re not sure where to start, Port Forward is probably the most helpful resource on the Internet.
I used a high-numbered port on the external side of my router – i.e. I forwarded external port 53268 to port 22 (SSH) on the NAS. This is only very mild protection, but it does reduce the number of script kiddie attacks. To put that in context, while I was testing this process I just forwarded the normal port 22 to port 22. Within 2 minutes, my NAS was emailing me about failed login attempts. Using a high-numbered port makes most of this noise go away.
To go one better however, I also used my router’s firewall to prevent unknown IP addresses from connecting to SSH. Since I’m only ever doing this from work, I can safely limit this to the IP range of my work’s leased line. This means it’s highly unlikely anyone will ever be able to brute force their way into my SSH connection, if I ever carelessly leave password-based logins enabled.
Create a PuTTY configuration
I recommend creating a PuTTY configuration using PuTTY’s interface. This is the easiest way of setting all the options that will later be used by plink in my batch script. plink is the stripped down command-line interface for PuTTY.
Within this configuration, you need to set:
Connection type: SSH
Hostname and port: your home external IP address (or DNS name) and the port you’ve forwarded through your router, preferably a non-standard port number.
Connection > Data > Auto-login username: Put your Synology user name here.
Connection > SSH > Auth > Private key file for identification: here put the path to the location of your private key on your work machine, from where you’ll be initiating the connection back to home.
Connection > SSH > Tunnels: This bears some explanation. When you run VNC viewer on your work machine, you’ll connect to a port on your work machine. PuTTY forwards this then through the SSH tunnel. So here you need to choose a random “source port” (not the normal VNC port, if you’re also running VNC server on your work machine). This is the port that’s opened on your work machine. Then in the destination, put the LAN address of your home PC and add the normal VNC port. Like this:
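For example (192.168.1.50 here stands in for your home PC’s LAN address; 5990 is just the arbitrary local port I picked):
Source port: 5990
Destination: 192.168.1.50:5900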
Make sure you click Add.
Finally, go back to Session, type a name in the “Saved Session” field and click “Save”. You will then be able to use this configuration with plink for a fully-automatic login and creation of the SSH tunnel.
Now would be a good time to fire up this connection and check that you can login okay, without any password prompts or errors.
Using username "rob".
Authenticating with public key "Rob's SSH key"
BusyBox v1.16.1 (2014-05-29 11:29:12 CST) built-in shell (ash)
Enter ‘help’ for a list of built-in commands.
RobNAS1>
Making VNC connection
I would suggest keeping your PuTTY session open while you’re setting up and testing your VNC connection through the tunnel. This is really the easy bit, but there are a couple of heffalump pits, which I’ll now warn you about. So, assuming your VNC server is running on your home PC and your SSH tunnel is up, let’s now configure the VNC viewer at the work end. Those heffalump pits:
When you’re entering the “Remote Host”, you need to specify “localhost” or “127.0.0.1”. You’re connecting to the port on your work machine. Don’t enter your work machine’s LAN IP address – PuTTY is not listening on the LAN interface, just on the local loopback interface.
You need to specify the random port you chose when configuring tunnel forwarding (5990 in my case) and you need to separate that from “localhost” with double colons. A single colon is used for a different purpose, so don’t get tripped up by this subtle semantic difference.
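So, with my numbers, the Remote Host field looks like this:
localhost::5990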
If you have a working VNC session at this point, congratulations! That’s the hard work out of the way.
It would be nice to automate the whole connection process. While you have your VNC session established, it is worth saving a VNC configuration file, so you can use this in a batch script. Click the VNC logo in the top left of the VNC session, then “Save session to a .vnc file”. You have the option to store the VNC password in this file, which I’ve chosen to do.
Before saving the session, you might want to tweak some optimization settings. This will really vary depending on your preferences and the speed of your connection. On this subject, this page is worth a read. I found I had the best results when using Tight encoding, with best compression (9), JPEG quality 4 and Allow CopyRect.
One batch script to rule them all
To automate the entire process, bringing up the tunnel and connecting via VNC, you might like to amend the following batch script to fit your own environment:
@echo off
start /min "SSH tunnel home" "C:\Program Files (x86)\PuTTY\plink.exe" -N -batch -load "Home with VNC tunnel"
REM Use ping to pause for 2 seconds while connection establishes
ping -n 3 localhost > NUL
"C:\Batch scripts\HomePC.vnc"
I suggest creating a shortcut to this batch file in your Start menu and setting its properties such that it starts minimised. While your SSH tunnel is up, you will have a PuTTY icon on your task bar. Don’t forget to close this after you close VNC, to terminate the tunnel link. An alternative approach is to use the free tool MyEnTunnel to ensure your SSH tunnel is always running in the background if that’s what you want. I’ll leave that up to you.
DSM Upgrades
After a DSM upgrade, you may find that your SSH config resets and you can no longer use VNC remotely. In that eventuality, log into your NAS via your LAN (as admin) and change the config back as above. You’ll also probably need to chmod busybox again.
root locked out of SSH?
For the first time in my experience, during May 2015, a DSM upgrade reset the suid bit on Busybox (meaning no more su), but didn’t reset the PermitRootLogin setting. That meant that root could not log in via SSH. Nor could you change to root (using su). If you find yourself in this position, follow these remedial steps:
Go to Control Panel > Terminal & SNMP
Check the “Enable Telnet service” box.
Click “Apply”.
Log in as root, using Telnet. You can either use PuTTY (selecting Telnet/port 23 instead of SSH/port 22) or a built-in Telnet client.
At the prompt, enter chmod 4755 /bin/busybox.
Go to Control Panel > Terminal & SNMP
Uncheck the “Enable Telnet service” box.
Click “Apply”.
Do make sure you complete the whole process; leaving Telnet enabled is a security risk, partly because passwords are sent in plain text, which is a Very Bad Thing.
Conclusion
So, what do you think? Better than LogMeIn/TeamViewer? Personally I prefer it, because I’m no longer tied to a third party service. There are obvious drawbacks (it’s harder to set up, for a start, and if you firewall your incoming SSH connection, you can’t use it from absolutely anywhere on the Internet) but I like it for its benefits, including what I consider to be superior security.
Anyway, I hope you find this useful. Until next time.
[easyreview title=”Complexity rating” icon=”geek” cat1title=”Level of experience required, to follow this how-to.” cat1detail=”We’re pulling together a few sophisticated components here, but keep your eye on the ball and you’ll be okay.” cat1rating=”4″ overall=”false”]
It has been a while since I have had time to work on Laravel development or indeed write a tutorial. And since then, I have decommissioned my main web development server in favour of a Synology NAS. Dummy and I use a third party hosting service for hosting our clients’ web sites and our own. This shared hosting service comes with limitations that make it impossible to install Laravel through conventional means.
So instead, I’m setting up a virtual development environment that will run on the same laptop that I use for code development. Once development is complete, I can upload the whole thing to the shared hosting service. Getting this set up is surprisingly complicated, but once you’ve worked through all these steps, you’ll have a flexible and easy-to-use environment for developing and testing your Laravel websites. This article assumes we’re running Windows.
Components
VirtualBox enables you to run other operating systems on existing hardware, without wiping anything out. Your computer can run Windows and then through VirtualBox, it can run one or more other systems at the same time. A computer within a computer. Or in this case, a server within a computer. Download VirtualBox here and install it.
Vagrant enables the automation of much of the process of creating these “virtual computers” (usually called virtual machines). Download Vagrant here and install it.
Git for Windows. If you’re a developer, chances are you know a bit about Git, so I’ll not go into detail here. Suffice it to say that you’ll need Git for Windows for this project: here.
PuTTY comes with an SSH key pair generator, which you’ll need if you don’t already have a public/private key pair. Grab the installer here.
PHP for Windows. This is not used for powering your websites, but is used by Composer (next step). I suggest downloading the “VC11 x86 Thread Safe” zip file from here. Extract all files to C:\php. There’s no setup file for this. Rename the file php.ini-development to php.ini and remove the semicolon from the line ;extension=php_openssl.dll. Find a line containing “;extension_dir” and change it to extension_dir = "ext" – the finished lines are shown just after this list.
Composer for Windows. Composer is a kind of software component manager for PHP and we use it to install and set up Laravel. Download the Windows installer here and install.
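Going back to the PHP step for a moment: after those edits, the two relevant lines in C:\php\php.ini should end up looking like this (everything else can stay at its default):
extension_dir = "ext"
extension=php_openssl.dll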
SSH key pair
You’ll need an SSH key pair later in this tutorial. If you don’t already have this, generate as follows:
Start PuTTY Key Generator. (It may be called “PuTTYgen” in your Start Menu.)
I would suggest accepting the defaults of a 2048-bit SSH-2 RSA key pair. Click “Generate” and move your mouse around as directed by the program.
You can optionally give the key a passphrase. If you leave the passphrase blank, you can ultimately use your key for password-less logins. The drawback is that if your private key ever falls into the wrong hands, an intruder can use the key in the same way. Make your decision, then save the public and private keys. You should always protect your private key. If you left the passphrase blank, treat it like a plain text password. I tend to save my key pairs in a directory .ssh, under my user folder.
Use the “Save private key” button to save your private key (I call it id_rsa).
Don’t use the “Save public key” button – that produces a key that won’t work well in a Linux/Unix environment (which your virtual development box will be). Instead, copy the text from the “Key” box, under where it says “Public key for pasting into OpenSSH authorized_keys file:”. Save this into a new text file. I call my public key file “id_rsa.pub”.
Install the Laravel Installer (sounds clumsy, huh?!)
Load Git Bash.
Download the Laravel installer with this command: composer global require "laravel/installer=~1.1". This will take a few minutes, depending on the speed of your connection.
Ideally, you want the Laravel executable in your system path. On Windows 7/8, from Windows/File Explorer, right-click “My Computer”/”This PC”, then click Properties. Click Advanced System Settings. Click the Environment Variables button. Click Path in the System variables section (lower half of the dialogue), then click Edit. At the very end of the Variable value field, add “;%APPDATA%\Composer\vendor\bin”.
Click OK as needed to save changes. Git Bash won’t have access to that new PATH variable until you’ve exited and restarted.
Create Laravel Project
All your Laravel projects will be contained and edited within your Windows file system. I use NetBeans for development and tend to keep my development sites under (e.g.): C:\Users\Geek\Documents\NetBeansProjects\Project World Domination. Create this project as follows:
Fire up Git Bash. This makes sure everything happens in the right place. The remaining commands shown below are from this shell.
Change to the directory above wherever you want the new project to be created. cd ~/Documents/NetBeansProjects
Install Laravel: laravel new "Project World Domination"
Note: if the directory “Project World Domination” already exists, this command will fail with an obscure error.
That’s it for this stage. Were you expecting something more complicated?
Laravel Homestead
Homestead is a pre-built development environment, consisting of Ubuntu, a web server, PHP, MySQL and a few other bits and bobs. It’s a place to host your Laravel websites while you’re testing them locally. Follow these steps to get it up and running:
From a Git Bash prompt, change to your user folder. Make sure this location has sufficient space for storing virtual machines (800MB+ free). cd ~
Make the Homestead Vagrant “box” available to your system. vagrant box add laravel/homestead
This downloads 800MB or so and may take a while.
Clone the Homestead repository into your user folder. git clone https://github.com/laravel/homestead.git Homestead
This should be pretty quick and results in a new Homestead folder containing various scripts and configuration items for the Homestead virtual machine.
Edit the Homestead.yaml file inside the Homestead directory. In the section “authorize”, enter the path to your public SSH key (see above). Similarly, enter the path to your private key in the “keys” section.
Vagrant can easily synchronise files between your PC and the virtual machine. Any changes in one place instantly appear in the other. So you could for example in the “folders” section, map C:\Users\Fred\Code (on your Windows machine) to /home/vagrant/code (on the virtual machine). In my case, I’ve got this:
folders:
    - map: ~/Documents/NetBeansProjects
      to: /home/vagrant/Projects
We’re going to create a fake domain name for each project. Do something like this in the Homestead.yaml file:
sites:
    - map: pwd.localdev
      to: /home/vagrant/Projects/Project World Domination/public
Of course, if you put “http://pwd.localdev” in your browser, it will have no idea where to go. See the next section “Acrylic DNS proxy” for the clever trick that will make this all possible.
To fire up the Homestead virtual environment, issue the command vagrant up from the Homestead directory. This can take a while and may provoke a few security popups.
Here’s the complete Homestead.yaml file:
---
ip: "192.168.10.10"
memory: 2048
cpus: 1
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: ~/Documents/NetBeansProjects
      to: /home/vagrant/Projects
sites:
    - map: pwd.localdev
      to: /home/vagrant/Projects/Project World Domination/public
variables:
    - key: APP_ENV
      value: local
At this point, you should be able to point your browser to http://127.0.0.1:8000. If you have created a Laravel project as above, and everything has gone well, you’ll see the standard Laravel “you have arrived” message. The Homestead virtual machine runs the Nginx webserver and that webserver will by default give you the first-mentioned web site if you connect by IP address.
VirtualBox is configured to forward port 8000 on your computer to port 80 (the normal port for web browsing) on the virtual machine. In most cases, you can connect directly to your virtual machine instead of via port forwarding. You’ll see in the Homestead.yaml file that the virtual machine’s IP address is set to 192.168.10.10. So generally (if there are no firewall rules in the way), you can browse to http://127.0.0.1:8000 or http://192.168.10.10 (the port number 80 is assumed, if omitted). Both should work. Personally I prefer the latter.
Acrylic DNS proxy
Of course we want to be able to host multiple development websites on this virtual machine. To do this, you need to be able to connect to the web server by DNS name (www.geekanddummy.com), not just by IP address. Many tutorials on Homestead suggest editing the native Windows hosts file, but to be honest that can be a bit of a pain. Why? Because you can’t use wildcards. So your hosts file ends up looking something like this:
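For example (the site names here are made up – one line per development site):
127.0.0.1 site1.local
127.0.0.1 site2.local
127.0.0.1 site3.local
127.0.0.1 site4.local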
(If you’re using 192.168.10.10, just replace 127.0.0.1 in the above example.) So this can go on and on, if you’re developing a load of different sites/apps on the same Vagrant box. Wouldn’t it be nice if you could just put a single line, 127.0.0.1 *.local? This simply doesn’t work in a Windows hosts file.
And this is where the Acrylic DNS proxy server comes in. It has many other great features, but the one I’m particularly interested in is the ability to deal with wildcard entries. All DNS requests go through Acrylic and any it can’t respond to, it sends out to whichever other DNS servers you configure. So it sits transparently between your computer and whatever other DNS servers you normally use.
The Acrylic website has instructions for Windows OSes – you have to configure your network to use Acrylic instead of any other DNS server. Having followed those instructions, what we’re now most interested in is the Acrylic hosts file. You should have an entry in your Start menu saying “Edit Acrylic hosts file”. Click that link to open the file.
Into that file, I add a couple of lines (for both scenarios, port forwarding and direct connection, so that both work):
127.0.0.1 *.localdev
192.168.10.10 *.localdev
I prefer using *.localdev, rather than *.local for technical reasons (.local has some peculiarities).
This now means that I can now put in my Homestead.yaml file the following:
sites:
    - map: site1.localdev
      to: /home/vagrant/Projects/site1/public
    - map: site2.localdev
      to: /home/vagrant/Projects/site2/public
    - map: site3.localdev
      to: /home/vagrant/Projects/site3/public
    - map: site4.localdev
      to: /home/vagrant/Projects/site4/public
    - map: site5.localdev
      to: /home/vagrant/Projects/site5/public
    - map: site6.localdev
      to: /home/vagrant/Projects/site6/public
    - map: site7.localdev
      to: /home/vagrant/Projects/site7/public
and they will all work. No need to add a corresponding hosts file entry for each web site. Just create your Laravel project at each of those directories.
Managing MySQL databases
I would recommend managing your databases by running software on your laptop that communicates with the MySQL server on the virtual machine. Personally I would use MySQL Workbench, but some people find HeidiSQL easier to use. HeidiSQL can manage PostgreSQL and Microsoft SQL databases too. You can connect via a forwarded port. If you wish to connect directly to the virtual machine, you’ll need to reconfigure MySQL in the virtual machine, as follows:
Start the Git Bash prompt
Open a shell on the virtual machine by issuing the command vagrant ssh
Assuming you know how to use vi/vim, type vim /etc/my.cnf. If you’re not familiar with vim, try nano, which displays help (keystrokes) at the bottom of the terminal: nano /etc/my.cnf
Look for the line bind-address = 10.0.2.15 and change it to bind-address = *
Save my.cnf and exit the editor.
Issue the command service mysql restart
You can now connect to MySQL using the VM’s normal MySQL port. Exit the shell with Ctrl-D or exit.
Okay, okay, why go to all this trouble? I just prefer it. So sue me.
Forwarded port: Port 33060, Host 127.0.0.1, User name homestead, Password secret
Direct to VM: Port 3306, Host 192.168.10.10, User name homestead, Password secret
Managing your environment
Each time you change the Homestead.yaml file, run the command vagrant provision from the Homestead directory, to push the changes through to the virtual machine. And once you’ve finished your development session, run vagrant suspend, to pause the virtual machine. (vagrant up starts it again.) If you want to tear the thing apart and start again, run the command vagrant destroy followed by vagrant up.
I’ve recently taken the plunge and invested in a Synology NAS – the powerful DS214Play. Some of my colleagues have been raving about Synology’s NASes for a while and I thought it was about time I saw what all the fuss was about. This how-to article is not the place for a detailed review so suffice it to say I have been thoroughly impressed – blown away even – by the DS214Play.
The NAS is taking over duties from my aging IBM xSeries tower server. The server is a noisy, power-hungry beast and I think Mrs Geek will be quite happy to see the back of it. Life with a technofreak, eh. One of the duties to be replaced is hosting a few lightly-loaded web apps and development sites.
The NAS has fairly straightforward web hosting capabilities out of the box. Apache is available and it’s a cinch to set up a new site and provision a MySQL database. Problems arise however when you try to do anything out of the ordinary. Synology improves the NAS’s capabilities with every iteration of DSM (DiskStation Manager, the web interface), but at the time of writing, version 5.0-4482 of DSM doesn’t allow much fine tuning of your web site’s configuration.
A particular issue for anyone who works with web development frameworks (Laravel, CodeIgniter, CakePHP and the like) is that it’s really bad practice to place the framework’s code within the web root. I usually adopt the practice of putting the code in sibling directories. So, in the case of CodeIgniter for example, within a higher-level directory, you’ll have the system, application and public_html directories all in one place. Apache is looking to public_html to load the web site and then typically the index.php file will use PHP code inclusion to bootstrap the framework from directories not visible to the web server. Much better for security.
DSM doesn’t currently provide any way of customising the web root. All web sites are placed in a sub-folder under (e.g.) /volume1/web/. That’s it. When typing in the sub-folder field, forward and back slashes are not permitted.
This is my intended folder structure for an example CodeIgniter application:
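Something like this (illustrative only – the CodeIgniter files themselves aren’t the point; what matters is that only public_html is exposed to Apache):
/volume1/web/test.domain.com/
    application/
    system/
    public_html/
        index.php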
First, use the Control Panel to create your new virtual host. I give all my virtual hosts the same sub-folder name as the domain name. Here, let’s go for test.domain.com:
Very shortly, this new sub-folder should appear within your web share. You can place your framework in this folder. If you’re not yet ready to put the framework there, at least create the folder structure for the public folder that will equate to your Apache DocumentRoot. In my example, this would involve creating public_html within the test.domain.com directory.
Next, log in to your NAS as root, using SSH. We need to edit the httpd-vhost.conf-user file:
cd /etc/httpd/sites-enabled-user
vi httpd-vhost.conf-user
The VirtualHost directive for your new web site will look something like this:
<VirtualHost *:80>
    ServerName test.domain.com
    DocumentRoot "/var/services/web/test.domain.com"
    ErrorDocument 403 "/webdefault/sample.php?status=403"
    ErrorDocument 404 "/webdefault/sample.php?status=404"
    ErrorDocument 500 "/webdefault/sample.php?status=500"
</VirtualHost>
Change the DocumentRoot line as required:
<VirtualHost *:80>
    ServerName test.domain.com
    DocumentRoot "/var/services/web/test.domain.com/public_html"
    ErrorDocument 403 "/webdefault/sample.php?status=403"
    ErrorDocument 404 "/webdefault/sample.php?status=404"
    ErrorDocument 500 "/webdefault/sample.php?status=500"
</VirtualHost>
Then save the file.
UPDATE: Thanks to commenter oggan below for this suggestion – instead of the following direction, you can just issue the command httpd -k restart at the command line. There’s not a lot of information out there about causing Apache to reload its configuration files. I found that calling the RC file (/usr/syno/etc.defaults/rc.d/S97apache-sys.sh reload) didn’t actually result in the new config being used. Whatever the reason for this, you can force the new configuration to load by making some other change in the control panel for web services. For example, enable HTTPS and click Apply (you can disable again afterwards).
You will now find that Apache is using the correct web root, so you can continue to develop your web application as required.
NB: There’s a big caveat with this. Every time you make changes using the Virtual Host List in the Synology web interface, it will overwrite any changes you make to the httpd-vhost.conf-user user file. Until Synology makes this part of interface more powerful, you will need to remember to make these changes behind the scenes every time you add or remove a virtual host, for all hosts affected.
One of the most interesting areas of development in the consumer technology industry is wearable tech. The segment is in its infancy and no one quite knows whether it will turn out to produce damp squibs or cash cows (if you’ll pardon the mixed metaphors). Top manufacturers are jostling for space with arguably premature “me too” gadgets that amount to little more than technology previews. There are even technology expos dedicated to this new sector.
When Samsung brought out its Galaxy Gear, I thought “we might have something here”. But the price was all wrong. I know the company can’t expect to ship many units at this stage in the game, but the opening price of £300 for a bleeding-edge, partially-formed lifestyle accessory kept all but the most dedicated technophiles firmly at bay. The Gear has failed to capture the public’s imagination and I think I know why. Putting aside the unconvincing claims that the Gear “connects seamlessly with your Samsung smartphone to make life easier on the go“, there’s one very big problem with this, and almost all other smart watches: it’s ugly.
Watches long since ceased to be simply pedestrian tools that tell you the time. They are fashion accessories. They express our individuality. Who wants to walk around toting one of these half-baked forearm carbuncles?
So I noted with interest Motorola’s announcement yesterday that the company is getting ready to launch the new round-faced, Android Wear-powered Moto 360.
Somewhat like Apple, Motorola has a reputation for adding its own design twist to everyday technology. I have a DECT cordless answering machine from Motorola, chosen largely on the strength of its looks, in a market where most of these devices have very similar capabilities.
Motorola’s previous attempt at a smart watch, the MOTOACTV, is frankly no supermodel. But if the MOTOACTV is the acne-ridden, orthodontic braces-sporting ugly duckling, the Moto 360 is the fully grown, airbrushed to perfection swan.
Just look at it. Now we’re onto something. Now we’ve got a watch where I wouldn’t have to spend all day persuading myself it’s pretty. Quite the contrary. I’m not that bowled over by the leather strap version, but in metal bracelet guise, I think we’re looking at a genuine designer item.
Pricing is yet to be announced, and no doubt it will be a long time before it’s stocked in UK stores. But this Geek hazards a guess that it will be worth the wait. Until it’s available, the only smart watch that comes close in terms of style in my humble opinion is the Pebble Steel, which is a little hard to come by, this side of the Atlantic.
As you may know, here at Geek & Dummy, we’re building up a free library of sound effects, which you’re welcome to use in your own projects. For the best results you really need to use decent quality recording equipment – a microphone attached to your computer will just pick up lots of unwanted noise. We’ve achieved really great results with the Tascam DR-05, which for the price (about £80) packs an amazing sound quality into an easily pocketable format. It helps to pair this with a decent SD card – see our recent MicroSD card head-to-head to see what’s the best value for money in that department.
When recording, make sure your audio sample contains about 2 seconds of ambient noise. This enables us to profile the ambient noise before we remove it from the sample.
Run Audacity. If you don’t have this incredible (but dull-looking!) free software, pick it up here.
Open the sample (File –> Open).
Using the selection tool, select your couple of seconds of ambient noise – this “silence” should look virtually flat in the display.
On the menu, choose Effect –> Noise Removal.
Click “Get Noise Profile”.
Press Ctrl-A to select the whole sample.
On the menu, choose Effect –> Noise Removal again.
This time, click OK. The default settings are probably okay, though you can play with them to achieve different results.
Listen to the sound sample. Sometimes noise removal can result in artificial sounding samples. If that’s the case, you can take a noise profile from a different quiet section of the sample and try again, or try with different parameters.
You can now remove silent sections of the audio as required. You can either select the sections and press the delete key, or use the Truncate Silence feature in Audacity (Effect –> Truncate Silence) to do it automatically. Use the zoom tool for precision removal of short sections of silence.
We “normalize” the sample to take it to the maximum volume possible without causing distortion. Before normalizing, you may want to find and delete any unwanted loud sections from the sample, in order to improve the effect of normalization.
To normalize the sample, ensure it is all either selected or deselected. Then choose Effect –> Normalize. Again there are some configurable settings here.
Listen to the sample again to make sure you’re happy with the results. All changes can be undone with Ctrl-Z.
Anyone who’s pulled a computer apart will know how much dust, crud and miniature wildlife can take up residence within your machine’s delicate circuitry. This build-up is bad for your computer. In particular, it makes fans and heat sinks less efficient and causes everything to warm up.
At best, this is a nuisance. Many power-regulating computers will simply slow down to allow the system to cool off. At worst, though, the excess heat in a power supply for example can set your computer – and your house or office – on fire. Clearly this is not A Good Thing.
Periodically then, it’s a good idea to take an “air duster” to your computer’s innards. Air dusters usually consist of cans of compressed air, with a straw-like nozzle to direct the air flow. They’re designed to create enough pressure to remove dirt but not so much as to cause damage.
Where I work, we used to get through a ton of these cans of air. The problem was, just when you really needed the air duster, you’d look on the shelf and there’d just be an empty can. Harrumph. How inconsiderate.
Enter the Giotto’s Rocket-air. It has a thick flexible rubber body and a solid plastic nozzle. Squeeze firmly and you get a blast of air not dissimilar to that from a can of compressed air. It’s not hard work to operate and of course the best part is that you have an unlimited supply of air at your disposal (at least until some thieving, envious toe-rag runs off with it).
It’s theoretically available in a few different colours. I’ve only seen it in the UK in the red/black regalia, not that it matters: I didn’t buy it for its looks. Mind you, as looks go, it’s a fairly funky tool and was surprisingly quite a conversation piece when it first arrived.
Speaking of design, you have to love the attention to detail here. Giotto’s makes camera equipment, so the Rocket-air’s primary function is to blow dust from delicate camera lenses (the fact that we can bend it to other uses is a big bonus). On the opposite side to the nozzle, there’s a fast air inlet valve. This means that when you release the blower, rather than sucking dust back in through its nozzle, it pulls in (hopefully) clean air from the other side.
Oh, and the “rocket fins” on the base of the blower enable it to stand up stably. Not massively important, just a nice bit of design. On two of these fins there are holes punched so that you can thread a lanyard through. Great for hanging it from your neck should you be so inclined. People will give you funny looks though.
So it’s well made, durable, moderately attractive, great at its job – there’s got to be a catch, right? The price. Amazon has it on sale for £8.99 at the moment. I don’t know about you, but my first thought was, “That’s a bit expensive for a glorified executive stress toy.” But then if you think about it, you can’t really buy a can of compressed air for less than £3 or £4. So the Rocket-air pays for itself pretty quickly – I would expect it to last as long as a hundred cans of compressed air. When you put it that way, it’s a bit of a bargain.
[easyreview title=”Geek rating” icon=”geek” cat1title=”Ease of use” cat1detail=”Short of poking it in your own eye, I’m not sure you can get this wrong.” cat1rating=”5″ cat2title=”Features” cat2detail=”Does everything you expect of it. I suspect it could be made slightly more powerful, but otherwise, there’s little to criticise.” cat2rating=”4.5″ cat3title=”Value for money” cat3detail=”When you compare it with the alternatives, it’s pretty near the cheapest solution to our dusty problems.” cat3rating=”4.5″ cat4title=”Build Quality” cat4detail=”Great attention to detail. Some slightly annoying slivers of rubber haven’t quite been removed after it came out of its mould. But otherwise, really well made. Feels like it will last forever – or at least until I retire (which is much the same thing).” cat4rating=”4.5″ summary=”Great solution to the problem of safely cleaning dust and dirt out of computers and fans. As a bonus, you can use it on your camera too. Can’t really recommend it any more highly.”]
There’s a lot of information and misinformation concerning SD technology, on the Internet. I’ve read a few conflicting theories about how many different companies actually manufacture SD cards and how many re-badge other companies’ kit. For your average consumer (and by this, I mean Dummy) though, this is all a bit of a red herring. Dummy wants to know (a) which card is fastest and (b) which card is best value for money. End of.
In order to separate the wheat from the chaff, we’ve picked 16GB Class 10 MicroSD cards (with full size adapters) from the top five brands out there: SanDisk, Transcend, Kingston, Toshiba and Samsung. We’re not too concerned with who made ’em; we just want to see which is the best package overall. To that end, we’ve benchmarked the cards in two different environments: firstly using a MicroSD card USB reader on a Windows PC and secondly on an Android mobile device. This gives us a really good idea of how these five cards perform in real-world scenarios.
If you can’t be bothered with all the stats and just want the summary (and we can’t blame you!), click here.
On the Windows PC, we benchmarked each card in a USB MicroSD card reader using the well-known tool CrystalDiskMark. Our mobile device test rig consisted of the brilliant Samsung Galaxy Note 2 (replaced by the even more incredible Note 3). Using this phone we ran the handy app A1 SD Bench. It’s less comprehensive than CrystalDiskMark, but gives you a good feel for how the card performs in one of its most likely use cases – plugged into a smartphone or tablet.
The Tests
For our tests, we ran five passes of the benchmarks on each card. When a test run contained anomalous results (which could be down to processor blips or other irrelevant causes), we discarded the test and ran again. We then averaged the scores from the five clean passes. We think this gives us a pretty bulletproof set of scores.
We ran three distinct batteries of tests:
5 passes of CrystalDiskMark with the file size set to 1000MB
5 passes of CrystalDiskMark with the file size set to 50MB
5 passes of A1 SD Bench
The results are summarised in this image:
This is worthy of some explanation! The bottom row of the table shows the “standard deviation”. You don’t need to understand statistics to get the point: if there’s not a lot of difference between the cards, you’ll see a small figure on this row. So for example in the sequential read tests, there’s very little difference between the cards (standard deviations of 0.12 and 0.18). In the case of a 50MB file written randomly in 512k chunks, there’s a huge variation – 5.97. If you look more closely at the column, you’ll see that the Samsung card clocked in at 15.2MB/s in this test, which is very good, while the Toshiba managed a paltry 1.3MB/s.
The bar chart overlays are fairly easy to follow, we hope. For each column, the card that has the biggest bar achieved the best speeds. This is most noticeable on columns with the largest deviation (512K random writes).
The price column is colour-coded going from red to green; expensive to cheap. “Expensive” is a relative term – there’s only £3.50 difference between the most expensive and least expensive cards here. Mind you, that means that the pricy SanDisk is over 40% more expensive than the modest Toshiba.
Let’s look at the SanDisk card for a moment. It’s considerably more expensive than the cheaper cards on test, so what do you get for your money? Well it actually performs worst on one of the tests (1000MB sequential write). If you’re working with video, that is going to be noticeable. The story’s better with random writes, but that seems to us to be a fairly “edge” scenario. It also does worst when being written to in an Android environment. All in all, not good. It certainly hasn’t justified its high price ticket. We were expecting a lot better from such a well known and respected manufacturer.
With generally narrow deviations, it’s hard to pick out an outstanding card, but if you can manage to stare at the figures a bit without your eyes glazing over, we do think a hero emerges.
Look at that Samsung card. Best at the 1000MB sequential read. Best at the 1000MB sequential write. Second best at the 1000MB/512K random write and leading the pack for the 50MB/512K random write. In almost every test, it achieves consistently good results. It’s top dog when it comes to writing data and no slouch at reading it. All this and it’s the second cheapest.
Conclusion
If you’d like to mine the data we collected, you can download it here as a zipped Excel spreadsheet – this contains the raw data from all our tests – no less than 450 data points! Whatever you do, we think you’ll agree with us that in this class, the best value MicroSD card by far is the Samsung 16GB Class 10 MicroSD. There isn’t even a runner up in this contest; the remaining contenders are either too slow or too expensive.