Review: Portable Bluetooth speakers for under £30; Tenvis vs. Elf vs. Bolse vs. Anker

So here’s a market that’s exploded in recent months: Bluetooth speakers. In particular portable Bluetooth speakers. Check it out – there are thousands of them on Amazon alone – and it’s the same on eBay.

Choosing something decent from that vast array of choices is no mean feat. We started out with a basic task: set four of the best sub-£30 speakers against each other, assess them as aesthetically and as scientifically as we can (not that scientific – we’re a Geek and a Dummy, not high-end audiophiles!) and come up with a winner. Not easy, as you’ll see!

The four contenders

For this review, we’ve picked four of the highest-rated speakers on the market for around £30 (at the time of review – these prices can be quite volatile). Starting clockwise from the top left of the picture, with prices at the time of our purchase, they are:

All four speakers have the following features in common:

  • Bluetooth (duh)
  • Microphone for hands-free voice calls
  • Aux-in socket, for playing music via cable
  • USB and audio cables included in the box

We’ll now look at the speakers in turn and see what each has to offer – or not – besides the basics.

Elf WS-701

Let’s start with the speaker that was (just) the cheapest of the four: “The Elf”. The Elf is a pretty anonymous black box. All these speakers have a matt rubberized finish; in the case of the Elf and its brother (more on that shortly), the finish is a little on the cheap side.

It has a full complement of six buttons: track skip (forwards/backwards), volume up/down, play/pause and call answer. This suits me far better than the minimalist single button approach. I don’t want to memorize how many seconds I need to hold a button or how many presses correspond to each particular action.

Pairing with my Android phone was quick and easy. The speaker confirms connection with an excessively loud female American voice saying “Connected”. Not very subtle when you’re trying to set up some quiet tunes in the morning. And there’s something about it that’s a little… cheesy.

Though we didn’t test this exhaustively, the speaker seemed to live up to the claimed charge time of 3-4 hours and playback time of 10-12. It was in that ballpark. And it was about the loudest speaker in this group test – we could turn this one up the most, before distortion crept in. Bluetooth range was pretty reasonable – about 15 metres before the connection started to drop.

All the speakers can be used as hands-free speaker phones, and this one was the best of the bunch. Good, clear call quality, and it handled the problems of two-way audio (avoiding feedback) very competently. I’m not sure that’s why people buy speakers like this, but in a pinch, you can use this as a conference phone with little difficulty.

The Elf was the heaviest and the cheapest (at the time of purchase) speaker in this group and, of the four, it’s the one I kept personally. It didn’t have the widest frequency response, but it is more than adequate – good, in fact, for the use I now put it to daily: music in the shower.

Incidentally, if you’re looking for this speaker, just use the “WS-701” search term – it’s currently available under a different brand name, “Coppertech”.


Bolse SZ-801

It’s fair to say that Chinese technology companies aren’t renowned for respecting the intellectual property rights of other companies. I mean, “Bolse”. Come on guys. Next you’ll be calling yourselves “Microsloft”.

After the slightly comical name, the next thing you notice about this speaker is how similar it is to the Elf. Virtually identical in appearance, in fact. The model names are similar too – “WS-701” vs. “SZ-801”. In fact, the only major difference between this speaker and the slightly cheaper Elf is that the Bolse has NFC, which we’ll come to in a second.

Here at Geek &amp; Dummy, we don’t pretend to be technology insiders. We really are just a regular Geek and a regular Dummy. So we’ll just state what everyone else can see is blatantly obvious: the two speakers came out of the same factory. The Bolse is a later or upgraded version of the Elf. Who knows whether “Bolse” and “Elf” even exist as trading entities?

Given their similarity (the grille pattern is very slightly different), you’ll not be surprised to read that they fared almost identically in our tests. I found the NFC to be little more than a gimmick. Place your NFC-equipped Android phone on the speaker (sorry, no iLove here, apparently!) and Bluetooth is automatically switched on and the phone and speaker are automatically paired. Given that pairing and switching on Bluetooth aren’t exactly onerous tasks, I’m not sure this feature is worth the extra £5 you pay for it.

Again, in comparison to the Elf, playback time is down to 8-10 hours (from 10-12). The box claims it is a more powerful speaker (12W RMS vs. 10W), but in our tests it distorted earlier than the Elf, indicating slightly poorer speaker construction. And hands-free call quality wasn’t bad, but it was slightly worse than the Elf’s, sounding “fuzzy” on the other end of the call.

The Bolse comes with a horrible drawstring bag that you probably wouldn’t want to use for storage. The included audio cable is a little better than the Elf’s.

In short, when placing the Elf WS-701 alongside the Bolse SZ-801, we’d only choose the Bolse if it were the same price as the Elf.


Tecevo T4 Soundbox

This is the lightest of the four speakers on test, weighing in at just 270g. It has just three buttons (forward, back and pause/play/answer). In our opinion, it’s the ugliest speaker on offer here today and it has the poorest battery of the set, at just 800mAh.

The Tecevo does have a few unique tricks up its sleeve, though. First, it comes in colours other than black. Second, it has phenomenal Bluetooth range: 90 feet (27 metres) – by far the best range of any of these speakers, and well beyond the typical range of Bluetooth devices.

And finally, and perhaps most interestingly, the Tecevo has an audio output socket (in addition to the input socket). This doesn’t mean you can daisy-chain speakers – the sound cuts out when you plug a lead into the output socket – but it does mean you can effectively use this speaker to Bluetooth-enable any other music system. Connect it to your ancient-but-good hifi, and stream tunes from your phone. Nice. Make sure it’s plugged into a USB charger though – the battery will give up the ghost before any of the competition.

Not that it matters much, but you wouldn’t want to use this speaker as a hands-free device. Calls sound like you’re in a tunnel, with lots of echo.


Anker MP141

This just leaves the Anker. Anker is making a good fist of emerging as a credible purveyor of gadgets in a very crowded marketplace. We’ve seen a few items from Anker now, and they do stand out from the crowd: manuals that read as though the writer actually speaks English, good packaging, good finish and decent warranties. The warranty on this speaker, for example, is 18 months, which is not bad at all.

The Anker has a different form factor from the others. It’s square rather than rectangular, and houses a single large speaker rather than the twin speakers in the others. It’s reassuringly chunky and the soft-touch rubber finish has the highest quality feel of the speakers in this group.

The Anker has the longest claimed playback time, at 15-20 hours. We can well believe it, given it has the largest capacity battery (2100mAh) and takes the longest to charge (5 hours). The larger battery contributes to the general feeling of solidity. Without doubt it stands out for the quality of its construction.

It’s the most up-to-date speaker too, following version 4.0 of the Bluetooth specification. It suffers with range though, dropping out at just 10 metres (33 feet). It’s not the loudest either, and its bass response, though adequate, isn’t quite as good as the others’. It’s also not great as a hands-free speaker.


Conclusion

So, which would we choose? If quality and aesthetics are most important to you, the Anker is the superior choice. But for us, the Elf is the clear winner, with its all-round abilities. And for a speaker this size, the sound quality is more than adequate. For sure, it’s no Bose, but then it’s a fraction of the price. And you wouldn’t want to take your expensive Bose into the bathroom with you – whereas with this, no problem. And helpfully, at the time we purchased, it was the cheapest of them all. Job done: buy the Elf (a.k.a. Coppertech).

Elf05

If you’re interested in all the data we captured and used for this review, here’s a spreadsheet you might enjoy. For the Geeks among us. 🙂

Review: TP-Link Powerline TL-WPA4220T kit

I’ve been using HomePlug AV adapters at home for years. These excellent devices turn your ring mains into a LAN, incredibly routing network data over your electrical cabling. As long as all your sockets are on the same phase, you can put a network socket wherever you have an electrical socket.

This is excellent news if you need to get Internet to some remote corner of your house, where your wifi doesn’t reach, but don’t want to trash the joint, installing network cabling. Plug one HomePlug device into a socket near your router and another wherever else you need it. That’s pretty much job done. Some of these devices can transmit data at up to 500Mbps, which is pretty impressive.

After six years of constant use, my old ZyXEL PLA-401s started becoming less and less reliable. I had four of these – one by the downstairs router, one upstairs plumbed into a separate wireless access point, another in the garage, likewise, and finally one in the loft for my servers (which I’ve since retired in favour of an excellent all-singing, all-dancing Synology DS214Play). Over time, the ZyXELs, along with the two WAPs, developed some idiosyncrasies, needing occasional restarts. They were also running a little hot, and that’s never a good thing. Well, they’ve provided good service, so nothing lost.

Ideally, I wanted to retire all the existing HomePlug adapters, plus the two wireless access points – and to do that in a cost-effective manner. My search brought me to TP-Link’s triple pack, the memorably named “TL-WPA4220T Kit”. £80 gets you three Powerline adapters, two of which are also wireless access points, with twin ethernet ports. Turns out this exactly matched my requirements, since I no longer needed a device in my loft. One device to plug into my wireless broadband router, one to provide wireless upstairs (and connect to two adjoining cabled devices) and one for the garage to provide wireless access in our garden.

TP-Link is one of the more reputable electronics manufacturers to send us gadgets out of China. Still, I’ve had a few run-ins with Chinese electronics, especially relatively cheap devices like these Powerline adapters, so I wasn’t expecting things to be entirely straightforward.

My first impression was favourable. My old ZyXEL adapters look clunky and old-fashioned next to these sleek, shiny gizmos. Clearly over the last few years, like all technology, the adapters have shrunk; and the pressure of certain design-led technology manufacturers has persuaded others to give aesthetics at least a token consideration prior to launch. There are two larger white adapters in the box (separately available as TL-WPA4220s) and a smaller grey-faced TL-PA4010. The smaller adapter is not wifi-enabled – you connect this one to your wireless router and it “introduces” your Internet connection to all other adapters (via the mains).

I read the promises of the simple “plug & play” (oh how nineties!) setup with a degree of skepticism. The poorly translated manuals did not instill confidence (though I’ve seen far worse). That said, these are consumer devices and I’m a Geek, so you’d think it wouldn’t be insurmountable. 😀

First, the problems. I could not get WPS to work. The idea is that you press the WPS button on your router, then the “wifi clone” button on the Powerline WAPs, and the network settings are automatically copied. I tried this every which way. You’ll appreciate that I’m no Dummy when it comes to these things, but it just wasn’t happening. Possibly my DrayTek router speaks a slightly different dialect of WPS. The TP-Links couldn’t understand the accent.

Another problem came from the fact that I attempted to set up the TP-Link adapters while the old ZyXELs were still installed. I half-expected that this would cause trouble. I was right. Ah well. A couple of factory resets later and with the ZyXELs unplugged we were working much better.

One more problem – though the TP-Links came with a CD, my laptop doesn’t possess a CD drive. I proxied the files via another CD-equipped device, only to find that the software included on the disk didn’t really work well under Windows 8. Doh!

Never mind. If you find yourself in this situation, do what I did: head over to TP-Link’s download site and grab everything you need from there.

Happily, once I had the correct software, I was able (easily) to log onto the wireless-enabled TP-Links, and enter all the wifi settings (twice, one for each WAP). With this all done, with the three devices talking to each other and the two wireless-enabled devices offering the same authentication requirements as my router, everything is now working brilliantly.

A great bonus for me is the huge signal you get from the WAPs. I’d already upgraded my DrayTek router with larger antennae, but the TP-Link WAPs just blew it away. Twice the signal strength and much better connectivity all round my house (and up my garden). The drawback is that someone can now wardrive my network from the next county, but hey, that’s a small price to pay for speed, right?

[easyreview title=”Geek rating” icon=”geek” cat1title=”Ease of use” cat1detail=”Some problems with WPS, but not too difficult to rectify.” cat1rating=”4″ cat2title=”Features” cat2detail=”High speed, extra LAN ports, WAPs included in the kit, MAC filtering if you want it – all in all, pretty impressive feature set.” cat2rating=”4.5″ cat3title=”Value for money” cat3detail=”Basically no kit that I’ve seen beats this on value.” cat3rating=”5″ cat4title=”Build Quality” cat4detail=”Looks really well built. Solid plastics. Reasonably attractive as these things go and not too big.” cat4rating=”4.5″ summary=”For me, this kit was great value for money. I wouldn’t have bought it otherwise. I have no hesitation recommending it.”]

HP WSD printer port type screws up Windows Server 2012 domain controllers!

I don’t think this can be that uncommon a scenario: a Windows Server 2008 R2 domain, with mainly HP printers. New domain controller added (at a new site), this time running Windows Server 2012 R2; HP printers there too.

This was the position I found myself in earlier this year. On paper, there’s nothing unusual about this set-up. Adding new 2012 DCs and standard HP workgroup printers shouldn’t be a problem. That’s what we all thought.

Until the domain controller started becoming non-responsive.

Cue many, many hours on TechNet and various other similar sites, chasing down what I became increasingly sure must be some latent fundamental corruption in Active Directory (horrors!), revealed only by the introduction of the newer o/s. There were many intermediate hypotheses. At one point, we thought maybe it was because we were running a single DC (and it was lonely). Or that the DC was not powerful enough for its file serving and DFS replication duties. So I provisioned a second DC. Ultimately I failed all services over to that, because the first DC needed increasingly frequent reboots.

And then the second domain controller developed the same symptom.

Apart from the intermittent loss of replication and certain other domain duties, the most obvious symptom was that the domain controller could no longer initiate DNS queries from a command prompt. Regardless of which DNS server you queried. Observe:

C:\Users\rob>nslookup bbc.com
Server: UnKnown
Address: 192.168.1.1

*** UnKnown can’t find bbc.com: No response from server

C:\Users\rob>nslookup bbc.com 192.168.1.2
Server: UnKnown
Address: 192.168.1.2

*** UnKnown can’t find bbc.com: No response from server

C:\Users\rob>nslookup bbc.com 8.8.8.8
Server: UnKnown
Address: 8.8.8.8

*** UnKnown can’t find bbc.com: No response from server

Bonkers, right? Half the time, restarting AD services (which in turn restarts file replication, Kerberos KDC, intersite messaging and DNS) brought things back to life. Half the time it didn’t, and a reboot was needed. Even more bonkers, querying the DNS server on the failing domain controller worked, from any other machine. DNS server was working, but the resolver wasn’t (so it seemed).

I couldn’t figure it out. Fed up, I turned to a different gremlin – something I’d coincidentally noticed in the System event log a couple of weeks back.

Ephemeral port exhaustion

Event ID 4266, with the ominous message “A request to allocate an ephemeral port number from the global UDP port space has failed due to all such ports being in use.”

What the blazes is an ephemeral port? I’m just a lowly Enterprise Architect. Don’t come at me with your networking mumbo jumbo.

Oh wait, hang on a minute. Out of UDP ports? DNS, that’s UDP, right?

With the penny slowly dropping, I turned back to the command line. netstat -anob lists all current TCP/IP connections, including the name of the executable (if any) associated to the connection. When I dumped this to a file I quickly noticed literally hundreds of lines like this:

TCP 0.0.0.0:64838 0.0.0.0:0 LISTENING 4244
[spoolsv.exe]
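If you want to tally up just how many listeners a given process holds, a quick count over the dumped file does the trick. Sketch below: the sample dump is purely illustrative (on the real DC you’d generate ports.txt with “netstat -anob &gt; ports.txt” from an elevated prompt), and I’m using grep here, so run it in Git Bash or similar if you’re on Windows.

```shell
# Illustrative sample of a netstat dump - a real one comes from
# running "netstat -anob > ports.txt" in an elevated prompt.
cat > ports.txt <<'EOF'
TCP 0.0.0.0:64838 0.0.0.0:0 LISTENING 4244
[spoolsv.exe]
TCP 0.0.0.0:64839 0.0.0.0:0 LISTENING 4244
[spoolsv.exe]
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 828
[svchost.exe]
EOF

# Count the listeners attributed to the print spooler
grep -c 'spoolsv.exe' ports.txt
```

In my real dump, that count ran into the hundreds – a smoking gun if ever there was one.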

As it happened, this bit of research coincided with the domain controller being in its crippled state. So I restarted the Print Spooler service, experimentally. Lo and behold, the problem went away. Now we were getting somewhere.

Clearly something in the printer subsystem is grabbing lots of ports. Another bell rang – I recalled when installing printers on these new domain controllers that instead of TCP/IP ports, I ended up with WSD ports.

WSD ports

What on earth is a WSD port?! (Etc.)

So these WSD ports are a bit like the Bonjour service, enabling computers to discover services advertised on the network. Not at all relevant to a typical Active Directory managed workspace, where printers are deployed through Group Policy. WSD ports (technically monitors, not ports) are however the default for many new printer installations, in Windows 8 and Server 2012. And as far as I can tell, they have no place in an enterprise environment.

Anyway, to cut a long story short (no, I didn’t, did I? This is still a long story – sorry!), I changed all the WSD ports to TCP/IP ports. The problem has gone away. Just like that.

I spent countless hours trying to fix these domain controllers. I’m now off to brick the windows* at Microsoft and HP corporate headquarters.

Hope this saves someone somewhere the same pain I experienced.

Peace out.

*Joke

Jeremiah 29:11 – a verse out of context?

When it comes to singling out and “claiming” verses of scripture, proponents of the Word of Faith movement don’t have a monopoly. From Conservative to Charismatic, Evangelical to Eastern Orthodox, Christians love clinging onto comforting extracts from the Word of God. And this is right and commendable. Continue reading “Jeremiah 29:11 – a verse out of context?”

News: GD1 – The world’s best URL shortener?!

Have you ever needed to write down a web site address, or worse – type it into a text message? And it’s something like http://www.someboguswebsite.com/this/is/a/painfully/long/url. Tedious, right? Or have you needed to paste an address into a tweet, but you’ve come up against the maximum character limit?

Micro Minibus by Robert Couse-Baker
In the case of Twitter, chances are you’ve used Twitter’s URL shortener of choice, Bitly. In this case, the awful, long URL becomes http://bit.ly/1xXCa5h – 21 characters instead of 60. Quite a trick. So you use the shortened URL for convenience, pass it on via social media or SMS and this is magically transformed into the original URL, upon use.

Recently in my very geeky news feed, I came across Polr, a self-hosted URL shortener. What a wheeze! Grab yourself a suitable domain, and you can poke your tongue out at Twitter, Google and the like, with all their evil data-mining ways.

It was surprisingly easy to get up and running with Polr, in our case using a virtual server hosted with Amazon. We bought a nifty little domain, gd1.uk and off we go! To be honest, the most time-consuming part was tracking down a short domain name – there aren’t many about.

All this is a roundabout way of saying: please feel free to use our shiny new URL shortener. Because it’s so young, the URLs generated really are very short. https://geekanddummy.com, for example, is now http://gd1.uk/1 – just 15 of your precious characters.

Yes, we’ve had to put adverts on it. Server hosting ain’t free. But we won’t charge you for using the service and we have no wicked designs on your data. Promise.

So go to it. Bookmark gd1.uk and enjoy the majesty, the awe of the world’s best* URL shortener. GD1 – it’s a good one.

Happy tweeting/SMSing.

Geek

*Well we think it is, anyway.

How-to: Ultra-secure VNC to computer on home network via Synology NAS using SSH tunnel

Introduction

Copyright Jösé

As you may know from other articles here, I have a Synology DS214Play NAS, and I’m a big fan. It can do so much for so little money. Well, today I’m going to show you a new trick – it will work for most Synology models.

There are a few different ways of remotely connecting to and controlling computers on your home network. LogMeIn used to be a big favourite, until they discontinued the free version. TeamViewer is really the next best thing, but I find it pretty slow and erratic in operation. It’s also not free for commercial use, whereas the system I describe here is completely free.

Many people extol the virtues of VNC, but it does have a big drawback in terms of security, with various parts of the communication being transmitted unencrypted over the network. That’s obviously a bit of a no-no.

The solution is to set up a secure SSH tunnel first. Don’t worry if you don’t know what that means. Just think about this metaphor: imagine you had your own private tunnel, from your home to your office, with locked gates at either end. There are no other exits from this tunnel. So no one can peek into it to see what traffic (cars) is coming and going. An SSH tunnel is quite like that. You pass your VNC “traffic” (data) through this tunnel and it is then inaccessible to any prying eyes.
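For the command-line inclined, the tunnel we’ll build with PuTTY below boils down to a single OpenSSH invocation like this. It’s a sketch only – the hostname, addresses and ports here are placeholders, so substitute your own router port, NAS user and home PC address:

```shell
# -p    the high, non-standard port your router forwards to the NAS's port 22
# -i    your private key file
# -L    listen on local port 5990 and relay it, via the NAS, to the VNC
#       server (port 5900) on the home PC at 192.168.1.50
# -N    don't run a remote command - just hold the tunnel open
ssh -p 53268 -i ~/.ssh/id_rsa -N \
    -L 5990:192.168.1.50:5900 \
    rob@home.example.com
```

Everything VNC then sends to localhost:5990 travels encrypted inside the tunnel – the gates at either end, if you like.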

Assumptions

This guide assumes the following things:

  1. You have a Synology NAS, with users and file storage already configured.
  2. You have at home a Windows computer that is left switched on and connected to your home network while you’re off-site.
  3. Your home PC has a static IP address (or a DHCP reservation).
  4. You have some means of knowing your home’s IP address. In my case, my ISP has given me a static IP address, but you can use something like noip.com if you’re on a dynamic address. (Full instructions are available at that link.)
  5. You can redirect ports on your home router and ideally add firewall rules.
  6. You are able to use vi/vim. Sorry, but that knowledge is really beyond the scope of this tutorial, and you do need to use vi to edit config files on your NAS.
  7. You have a public/private key pair. If you’re not sure what that means, read this.

Install VNC

There are a few different implementations of VNC. I prefer TightVNC for various reasons – in particular it has a built-in file transfer module.

When installing TightVNC on your home PC, make sure you enable at least the “TightVNC Server” option.

TightVNC_01

Check all the boxes in the “Select Additional Tasks” window.

TightVNC_02

You will be prompted to create “Remote Access” and “Administrative” passwords. You should definitely set the remote access password, otherwise anyone with access to your home network (e.g. someone who might have cracked your wireless password) could easily gain access to your PC.

TightVNC_03

At work, you’ll just need to install the viewer component.

Configure Synology SSH server

Within Synology DiskStation Manager, enable the SSH server. In DSM 5, this option is found at Control Panel > System > Terminal & SNMP > Terminal.

DSM_01

I urge you not to enable Telnet, unless you really understand the risks.

Next, login to your NAS as root, using SSH. Normally I would use PuTTY for this purpose.

You’ll be creating an SSH tunnel using your normal day-to-day Synology user. (You don’t normally connect using admin do you? Do you?!) Use vi to edit /etc/passwd. Find the line with your user name and change /sbin/nologin to /bin/sh. E.g.:

rob:x:1026:100:Rob:/var/services/homes/rob:/bin/sh

Save the changes.

As part of this process, we are going to make it impossible for root to log in. This is a security advantage. Instead, if you need root permissions, you’ll log in as an ordinary user and use “su” to escalate privileges. su doesn’t work by default; you need to add setuid to the busybox binary. If you don’t know what that means, don’t worry. If you do know what this means and it causes you concern, let me just say that according to my tests, busybox has been built in a way that does not allow users to bypass security via the setuid flag. So, run this command:

chmod 4755 /bin/busybox

Please don’t proceed until you’ve done this bit, otherwise you can lock root out of your NAS.
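If you’re nervous about that chmod, you can see exactly what the setuid bit does to a file’s permission string by experimenting on a scratch file first (the real target on the NAS is /bin/busybox, of course – the file name below is just a stand-in):

```shell
# Scratch file standing in for /bin/busybox
touch busybox-demo
chmod 4755 busybox-demo

# The leading 4 is the setuid bit; in ls output it appears as an
# 's' in the owner-execute position, i.e. -rwsr-xr-x
ls -l busybox-demo
```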

Next, we need to edit the configuration of SSH. We have to uncomment some lines (that normally begin with #) and change the default values. So use vi to edit /etc/ssh/sshd_config. The values you need to change should end up looking like this, with no # at the start of the lines:

AllowTcpForwarding yes
LoginGraceTime 5m
MaxAuthTries 6
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

In brief these changes do the following, line by line:

  1. Allow using SSH to go from the SSH server (the NAS box) to another machine (e.g. your home PC)
  2. Disconnect you if you haven’t completed your login within 5 minutes.
  3. Give you up to 6 authentication attempts per connection before disconnecting you.
  4. Allow authentication using a public/private key pair (which can enable password-less logons).
  5. Point the SSH daemon to the location of the list of authorized keys – this is relative to an individual user’s home directory.

Having saved these changes, you can force SSH to load the new configuration by uttering the following somewhat convoluted and slightly OCD incantation (OCD, because I hate leaving empty nohup.out files all over the place). We use nohup, because nine times out of ten this stops your SSH connection from dropping while the SSH daemon reloads:

nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out &

SSH keys

You need to have a public/private SSH key pair. I’ve written about creating these elsewhere. Make sure you keep your private key safely protected. This is particularly important if, like me, you use an empty key phrase, enabling you to log on without a password.

In your home directory on the Synology server, create (if it doesn’t already exist) a directory, “.ssh”. You may need to enable the display of hidden files/folders, if you’re doing this from Windows.

Within the .ssh directory, create a new file “authorized_keys”, and paste in your public key. The file will then normally have contents similar to this:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArLX5FlwhHJankhIoZcsIEgmHkOtsSR6eJINGgb4N3/4XQAHpmYPhlAy6Hg2hH8VqNLXgkVia+yMDaDOFQKXX6Ue+hOQt7Q5zB3W1NgVCsyIn9JBu3u6R8rDPBma248DhQ3yfac1iEZWa+3BrHaIM2dLVGu99C5z3Kh1NhDB83xetq08bHayzv39wuwZUZOohDzsCK29ZaEYU9ZctN/RZR4rW7A7odJbbgqG82IZXhUhiam2utpjszLJ+sMOw6z7tcpgnF5CLDys2xvE6ekLjEPA2b9KkrU6e+ILXM85s7/HP9aTkTwFyyBcPAvmO7i0xYyotu58DKf++nM2ZtpNBPQ== Rob's SSH key

This is all on a single line. For RSA keys, the line must begin ssh-rsa.

SSH is very fussy about file permissions. Log in to the NAS as root and then su to your normal user (e.g. su rob). Make sure permissions for these files are set correctly with the following commands:

chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
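To double-check you’ve got the modes right, stat will report them numerically. Here’s the same sequence run against a scratch directory so you can see the expected values – on the NAS, of course, you’d run the chmod commands against your real home directory, as above:

```shell
# Recreate the layout in a scratch directory and apply the same modes
mkdir -p demo-home/.ssh
touch demo-home/.ssh/authorized_keys
chmod 755 demo-home
chmod 700 demo-home/.ssh
chmod 600 demo-home/.ssh/authorized_keys

# Report the octal mode of each path - expect 755, 700 and 600
stat -c '%a %n' demo-home demo-home/.ssh demo-home/.ssh/authorized_keys
```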

If you encounter any errors at this point, make sure you fix them before proceeding. Now test your SSH login. If it works and you can also su to root, you can now safely set the final two settings in sshd_config:

PermitRootLogin no
PasswordAuthentication no

The effect of these:

  • Disallow direct logging in by root.
  • Disallow ordinary password-based logging in.

Reload SSH with nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out & as before.

Setting up your router

There are so many routers out there that I can’t really help you out with this one. You will need to port forward a port number of your choosing to port 22 on your Synology NAS. If you’re not sure where to start, Port Forward is probably the most helpful resource on the Internet.

I used a high-numbered port on the outer edge of my router. I.e. I forwarded port 53268 to port 22 (SSH). This is only very mild protection, but it does reduce the number of script kiddie attacks. To put that in context, while I was testing this process I just forwarded the normal port 22 to port 22. Within 2 minutes, my NAS was emailing me about failed login attempts. Using a high-numbered port makes most of this noise go away.

To go one better however, I also used my router’s firewall to prevent unknown IP addresses from connecting to SSH. Since I’m only ever doing this from work, I can safely limit this to the IP range of my work’s leased line. This means it’s highly unlikely anyone will ever be able to brute force their way into my SSH connection, if I ever carelessly leave password-based logins enabled.

Create a PuTTY configuration

I recommend creating a PuTTY configuration using PuTTY’s interface. This is the easiest way of setting all the options that will later be used by plink in my batch script. plink is the stripped-down command-line interface for PuTTY.

Within this configuration, you need to set:

  1. Connection type: SSH
  2. Hostname and port: your home external IP address (or DNS name) and the port you’ve forwarded through your router, preferably a non-standard port number.
  3. Connection > Data > Auto-login username: Put your Synology user name here.
    PuTTY_02
  4. Connection > SSH > Auth > Private key file for identification: here put the path to the location of your private key on your work machine, from where you’ll be initiating the connection back to home.
  5. Connection > SSH > Tunnels: This bears some explanation. When you run VNC viewer on your work machine, you’ll connect to a port on your work machine. PuTTY forwards this then through the SSH tunnel. So here you need to choose a random “source port” (not the normal VNC port, if you’re also running VNC server on your work machine). This is the port that’s opened on your work machine. Then in the destination, put the LAN address of your home PC and add the normal VNC port. Like this:
    PuTTY_01
    Make sure you click Add.
  6. Finally, go back to Session, type a name in the “Saved Session” field and click “Save”. You will then be able to use this configuration with plink for a fully-automatic login and creation of the SSH tunnel.
    PuTTY_03

Now would be a good time to fire up this connection and check that you can login okay, without any password prompts or errors.

Using username "rob".
Authenticating with public key "Rob's SSH key"

BusyBox v1.16.1 (2014-05-29 11:29:12 CST) built-in shell (ash)
Enter ‘help’ for a list of built-in commands.


RobNAS1>

Making VNC connection

I would suggest keeping your PuTTY session open while you’re setting up and testing your VNC connection through the tunnel. This is really the easy bit, but there are a couple of heffalump pits, which I’ll now warn you about. So, assuming your VNC server is running on your home PC and your SSH tunnel is up, let’s now configure the VNC viewer at the work end. Those heffalump pits:

  1. When you’re entering the “Remote Host”, you need to specify “localhost” or “127.0.0.1”. You’re connecting to the port on your work machine. Don’t enter your work machine’s LAN IP address – PuTTY is not listening on the LAN interface, just on the local loopback interface.
  2. You need to specify the random port you chose when configuring tunnel forwarding (5990 in my case) and you need to separate that from “localhost” with double colons. A single colon is used for a different purpose, so don’t get tripped up by this subtle semantic difference.

TightVNC_04

If you have a working VNC session at this point, congratulations! That’s the hard work out of the way.

It would be nice to automate the whole connection process. While you have your VNC session established, it is worth saving a VNC configuration file, so you can use this in a batch script. Click the VNC logo in the top left of the VNC session, then “Save session to a .vnc file”. You have the option to store the VNC password in this file, which I’ve chosen to do.

TightVNC_05

Before saving the session, you might want to tweak some optimization settings. This will really vary depending on your preferences and the speed of your connection. On this subject, this page is worth a read. I found I had the best results when using Tight encoding, with best compression (9), JPEG quality 4 and Allow CopyRect.

One batch script to rule them all

To automate the entire process, bringing up the tunnel and connecting via VNC, you might like to amend the following batch script to fit your own environment:

@echo off
start /min "SSH tunnel home" "C:\Program Files (x86)\PuTTY\plink.exe" -N -batch -load "Home with VNC tunnel"
REM Use ping to pause for about 2 seconds while the connection establishes
ping -n 3 localhost > NUL
"C:\Batch scripts\HomePC.vnc"

I suggest creating a shortcut to this batch file in your Start menu and setting its properties such that it starts minimised. While your SSH tunnel is up, you will have a PuTTY icon on your task bar. Don’t forget to close this after you close VNC, to terminate the tunnel link. An alternative approach is to use the free tool MyEnTunnel to ensure your SSH tunnel is always running in the background if that’s what you want. I’ll leave that up to you.
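If you ever drive the tunnel from a Linux or macOS machine with OpenSSH instead of plink, the fixed ping pause in the batch script above can be replaced with a loop that polls until the forwarded port is actually open. This is just a sketch using bash’s /dev/tcp feature; the demo call at the end deliberately polls an unused port so that it times out quickly.

```shell
#!/usr/bin/env bash
# Poll a TCP port until it accepts connections, or give up after a number
# of attempts. Host, port and retry count are illustrative.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-10} i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "open"
      return 0
    fi
    sleep 0.2
  done
  echo "timed out"
  return 1
}

# Demo: nothing is listening on this port, so the call times out.
result=$(wait_for_port 127.0.0.1 59999 3 || true)
echo "$result"
```

In a real script you’d call something like wait_for_port 127.0.0.1 5990 before launching the VNC viewer.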

DSM Upgrades

After a DSM upgrade, you may find that your SSH config resets and you can no longer use VNC remotely. In that eventuality, log into your NAS via your LAN (as admin) and change the config back as above. You’ll also probably need to chmod busybox again.

root locked out of SSH?

For the first time in my experience, a DSM upgrade (during May 2015) reset the suid bit on BusyBox (meaning no more su), but didn’t reset the PermitRootLogin setting. That meant that root could not log in via SSH, nor could you change to root (using su). If you find yourself in this position, follow these remedial steps:

  1. Go to Control Panel > Terminal & SNMP
  2. Check the “Enable Telnet service” box.
  3. Click “Apply”.
  4. Log in as root, using Telnet. You can either use PuTTY (selecting Telnet/port 23 instead of SSH/port 22) or a built-in Telnet client.
  5. At the prompt, enter chmod 4755 /bin/busybox.
  6. Go to Control Panel > Terminal & SNMP
  7. Uncheck the “Enable Telnet service” box.
  8. Click “Apply”.

Do make sure you complete the whole process; leaving Telnet enabled is a security risk, partly because passwords are sent in plain text, which is a Very Bad Thing.

Conclusion

So, what do you think? Better than LogMeIn/TeamViewer? Personally I prefer it, because I’m no longer tied to a third-party service. There are obvious drawbacks (it’s harder to set up, for a start, and if you firewall your incoming SSH connection, you can’t use it from absolutely anywhere on the Internet), but I like it for its benefits, including what I consider to be superior security.

Anyway, I hope you find this useful. Until next time.

Subway Tunnel image copyright © Jösé, licensed under Creative Commons. Used with permission.

How-to: Laravel 4 tutorial; part 6 – virtualised development environment – Laravel Homestead

[easyreview title=”Complexity rating” icon=”geek” cat1title=”Level of experience required, to follow this how-to.” cat1detail=”We’re pulling together a few sophisticated components here, but keep your eye on the ball and you’ll be okay.” cat1rating=”4″ overall=”false”]

Laravel Tutorials

Introduction

It has been a while since I have had time to work on Laravel development, or indeed write a tutorial. Since then, I have decommissioned my main web development server in favour of a Synology NAS. Dummy and I use a third-party hosting service to host our clients’ web sites and our own. This shared hosting service comes with limitations that make it impossible to install Laravel through conventional means.

So instead, I’m setting up a virtual development environment that will run on the same laptop I use for code development. Once development is complete, I can upload the whole thing to the shared hosting service. Getting this set up is surprisingly complicated, but once you’ve worked through all these steps, you’ll have a flexible and easy-to-use environment for developing and testing your Laravel websites. This article assumes we’re running Windows.

Components

  • VirtualBox enables you to run other operating systems on existing hardware, without wiping anything out. Your computer can run Windows and then, through VirtualBox, one or more other systems at the same time. A computer within a computer. Or in this case, a server within a computer. Download VirtualBox here and install it.
  • Vagrant automates much of the process of creating these “virtual computers” (usually called virtual machines). Download Vagrant here and install it.
  • Git for Windows. If you’re a developer, chances are you know a bit about Git, so I’ll not go into detail here. Suffice it to say that you’ll need Git for Windows for this project: here.
  • PuTTY comes with an SSH key pair generator, which you’ll need if you don’t already have a public/private key pair. Grab the installer here.
  • PHP for Windows. This is not used for powering your websites; it is used by Composer (next step). I suggest downloading the “VC11 x86 Thread Safe” zip file from here. Extract all files to C:\php (there’s no setup file for this). Rename the file php.ini-development to php.ini and remove the semicolon from the line ;extension=php_openssl.dll. Then find the line containing “;extension_dir” and change it to extension_dir = "ext".
  • Composer for Windows. Composer is a kind of software component manager for PHP, and we use it to install and set up Laravel. Download the Windows installer here and install it.

SSH key pair

You’ll need an SSH key pair later in this tutorial. If you don’t already have this, generate as follows:

  1. Start PuTTY Key Generator. (It may be called “PuTTYgen” in your Start Menu.)
  2. I would suggest accepting the defaults of a 2048-bit SSH-2 RSA key pair. Click “Generate” and move your mouse around as directed by the program.
    PuTTY key generation
  3. You can optionally give the key a passphrase. If you leave the passphrase blank, you can ultimately use your key for password-less logins. The drawback is that if your private key ever falls into the wrong hands, an intruder can use the key in the same way. Make your decision, then save the public and private keys. You should always protect your private key. If you left the passphrase blank, treat it like a plain text password. I tend to save my key pairs in a directory .ssh, under my user folder.
  4. Use the “Save private key” button to save your private key (I call it id_rsa).
  5. Don’t use the “Save public key” button – that produces a key that won’t work well in a Linux/Unix environment (which your virtual development box will be). Instead, copy the text from the “Key” box, under where it says “Public key for pasting into OpenSSH authorized_keys file:”. Save this into a new text file. I call my public key file “id_rsa.pub”.
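Incidentally, if you’re more at home with the OpenSSH toolchain (which ships with Git for Windows), an equivalent key pair can be generated from a Git Bash prompt. This is just a sketch; the output location and the empty passphrase (-N "") are illustrative only.

```shell
# Generate a 2048-bit RSA key pair non-interactively into a scratch
# directory (in practice you'd use ~/.ssh).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/id_rsa"

# id_rsa is the private key; id_rsa.pub is already in OpenSSH format.
ls "$keydir"
```

Bear in mind that PuTTY and plink expect the private key in .ppk format, so a key generated this way would need importing and converting via PuTTYgen first.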

Install the Laravel Installer (sounds clumsy, huh?!)

  1. Load Git Bash.
    Git Bash window
  2. Download the Laravel installer with this command: composer global require "laravel/installer=~1.1". This will take a few minutes, depending on the speed of your connection.
  3. Ideally, you want the Laravel executable in your system path. On Windows 7/8, from Windows/File Explorer, right-click “My Computer”/”This PC”, then click Properties. Click Advanced System Settings, then click the Environment Variables button. Click Path in the System variables section (the lower half of the dialogue), then click Edit. At the very end of the Variable value field, add “;%APPDATA%\Composer\vendor\bin”.
    Set PATH
    Click OK as needed to save changes. Git Bash won’t have access to that new PATH variable until you’ve exited and restarted.
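From a bash prompt you can simulate and sanity-check that PATH addition. The fallback path below assumes %APPDATA% is in its usual location (C:\Users\you\AppData\Roaming), which may not match your setup.

```shell
# Append Composer's vendor/bin to the PATH, as the Windows dialogue does.
composer_bin="${APPDATA:-$HOME/AppData/Roaming}/Composer/vendor/bin"
export PATH="$PATH:$composer_bin"

# Verify that the directory now appears on the PATH.
case ":$PATH:" in
  *":$composer_bin:"*) path_ok=yes ;;
  *)                   path_ok=no ;;
esac
echo "$path_ok"
```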

Create Laravel Project

All your Laravel projects will be contained and edited within your Windows file system. I use NetBeans for development and tend to keep my development sites under (e.g.): C:\Users\Geek\Documents\NetBeansProjects\Project World Domination. Create this project as follows:

  1. Fire up Git Bash. This makes sure everything happens in the right place. The remaining commands shown below are from this shell.
  2. Change to the directory above wherever you want the new project to be created.
    cd ~/NetBeansProjects
  3. Install Laravel:
    laravel new "Project World Domination"
    Note: if the directory “Project World Domination” already exists, this command will fail with an obscure error.
  4. That’s it for this stage. Were you expecting something more complicated?

Laravel Homestead

Homestead is a pre-built development environment, consisting of Ubuntu, a web server, PHP, MySQL and a few other bits and bobs. It’s a place to host your Laravel websites while you’re testing them locally. Follow these steps to get it up and running:

  1. From a Git Bash prompt, change to your user folder. Make sure this location has sufficient space for storing virtual machines (800MB+ free).
    cd ~
  2. Make the Homestead Vagrant “box” available to your system.
    vagrant box add laravel/homestead
    This downloads 800MB or so and may take a while.
  3. Clone the Homestead repository into your user folder.
    git clone https://github.com/laravel/homestead.git Homestead
    This should be pretty quick and results in a new Homestead folder containing various scripts and configuration items for the Homestead virtual machine.
  4. Edit the Homestead.yaml file inside the Homestead directory. In the section “authorize”, enter the path to your public SSH key (see above). Similarly, enter the path to your private key in the “keys” section.
  5. Vagrant can easily synchronise files between your PC and the virtual machine. Any changes in one place instantly appear in the other. So you could for example in the “folders” section, map C:\Users\Fred\Code (on your Windows machine) to /home/vagrant/code (on the virtual machine). In my case, I’ve got this:
    folders:
        - map: ~/Documents/NetBeansProjects
          to: /home/vagrant/Projects
  6. We’re going to create a fake domain name for each project. Do something like this in the Homestead.yaml file:
    sites:
        - map: pwd.localdev
          to: /home/vagrant/Projects/Project World Domination/public

    Of course, if you put “http://pwd.localdev” in your browser, it will have no idea where to go. See the next section “Acrylic DNS proxy” for the clever trick that will make this all possible.
  7. To fire up the Homestead virtual environment, issue the command vagrant up from the Homestead directory. This can take a while and may provoke a few security popups.

Here’s the complete Homestead.yaml file:

---
ip: "192.168.10.10"
memory: 2048
cpus: 1

authorize: ~/.ssh/id_rsa.pub

keys:
    - ~/.ssh/id_rsa

folders:
    - map: ~/Documents/NetBeansProjects
      to: /home/vagrant/Projects

sites:
    - map: pwd.localdev
      to: /home/vagrant/Projects/Project World Domination/public

variables:
    - key: APP_ENV
      value: local

At this point, you should be able to point your browser to http://127.0.0.1:8000. If you have created a Laravel project as above, and everything has gone well, you’ll see the standard Laravel “you have arrived” message. The Homestead virtual machine runs the Nginx webserver and that webserver will by default give you the first-mentioned web site if you connect by IP address.

Laravel landing page

VirtualBox is configured to forward port 8000 on your computer to port 80 (the normal port for web browsing) on the virtual machine. In most cases, you can connect directly to your virtual machine instead of via port forwarding. You’ll see in the Homestead.yaml file that the virtual machine’s IP address is set to 192.168.10.10. So generally (if there are no firewall rules in the way), you can browse to http://127.0.0.1:8000 or http://192.168.10.10 (the port number 80 is assumed, if omitted). Both should work. Personally I prefer the latter.

Acrylic DNS proxy

Of course we want to be able to host multiple development websites on this virtual machine. To do this, you need to be able to connect to the web server by DNS name (www.geekanddummy.com), not just by IP address. Many tutorials on Homestead suggest editing the native Windows hosts file, but to be honest that can be a bit of a pain. Why? Because you can’t use wildcards. So your hosts file ends up looking something like this:


127.0.0.1 pwd.local
127.0.0.1 some.other.site
127.0.0.1 override.com
127.0.0.1 webapp1.local
127.0.0.1 webapp2.local

(If you’re using 192.168.10.10, just replace 127.0.0.1 in the above example.) So this can go on and on, if you’re developing a load of different sites/apps on the same Vagrant box. Wouldn’t it be nice if you could just put a single line, 127.0.0.1 *.local? This simply doesn’t work in a Windows hosts file.
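To illustrate the tedium, here’s the sort of thing you end up scripting (or hand-maintaining) without wildcard support – one entry per development site, with made-up site names:

```shell
# Build a block of Windows hosts-file entries, one line per dev site.
hosts_block=""
for site in pwd webapp1 webapp2; do
  hosts_block+="127.0.0.1 ${site}.local"$'\n'
done
printf '%s' "$hosts_block"
```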

And this is where the Acrylic DNS proxy server comes in. It has many other great features, but the one I’m particularly interested in is the ability to deal with wildcard entries. All DNS requests go through Acrylic and any it can’t respond to, it sends out to whichever other DNS servers you configure. So it sits transparently between your computer and whatever other DNS servers you normally use.

The Acrylic website has instructions for Windows OSes – you have to configure your network to use Acrylic instead of any other DNS server. Having followed those instructions, what we’re now most interested in is the Acrylic hosts file. You should have an entry in your Start menu saying “Edit Acrylic hosts file”. Click that link to open the file.

Into that file, I add a couple of lines (for both scenarios, port forwarding and direct connection, so that both work):

127.0.0.1 *.localdev
192.168.10.10 *.localdev

I prefer using *.localdev, rather than *.local for technical reasons (.local has some peculiarities).

This means that I can now put the following in my Homestead.yaml file:

sites:
    - map: site1.localdev
      to: /home/vagrant/Projects/site1/public
    - map: site2.localdev
      to: /home/vagrant/Projects/site2/public
    - map: site3.localdev
      to: /home/vagrant/Projects/site3/public
    - map: site4.localdev
      to: /home/vagrant/Projects/site4/public
    - map: site5.localdev
      to: /home/vagrant/Projects/site5/public
    - map: site6.localdev
      to: /home/vagrant/Projects/site6/public
    - map: site7.localdev
      to: /home/vagrant/Projects/site7/public

and they will all work. No need to add a corresponding hosts file entry for each web site. Just create your Laravel project at each of those directories.
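As an aside: when the stanzas are this repetitive, a throwaway loop can generate them for pasting into Homestead.yaml. A convenience sketch, using the same naming pattern:

```shell
# Emit a Homestead.yaml "sites" section for sites 1..7.
sites_yaml="sites:"$'\n'
for i in 1 2 3 4 5 6 7; do
  sites_yaml+="    - map: site${i}.localdev"$'\n'
  sites_yaml+="      to: /home/vagrant/Projects/site${i}/public"$'\n'
done
printf '%s' "$sites_yaml"
```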

Managing MySQL databases

I would recommend managing your databases by running software on your laptop that communicates with the MySQL server on the virtual machine. Personally I would use MySQL Workbench, but some people find HeidiSQL easier to use. HeidiSQL can manage PostgreSQL and Microsoft SQL databases too. You can connect via a forwarded port. If you wish to connect directly to the virtual machine, you’ll need to reconfigure MySQL in the virtual machine, as follows:

  1. Start the Git Bash prompt
  2. Open a shell on the virtual machine by issuing the command vagrant ssh
  3. Assuming you know how to use vi/vim, type sudo vim /etc/my.cnf. If you’re not familiar with vim, try nano, which displays its keystrokes at the bottom of the terminal: sudo nano /etc/my.cnf
  4. Look for the line bind-address = 10.0.2.15 and change it to bind-address = *
  5. Save my.cnf and exit the editor.
  6. Issue the command sudo service mysql restart
  7. You can now connect to MySQL using the VM’s normal MySQL port. Exit the shell with Ctrl-D or exit.
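If you’d rather not edit the file interactively, steps 3 to 5 boil down to a single sed substitution. This is demonstrated below on a throwaway copy; on the virtual machine the real file is /etc/my.cnf and the command would need sudo.

```shell
# Make a dummy my.cnf fragment to demonstrate the edit on.
mycnf=$(mktemp)
printf '[mysqld]\nbind-address = 10.0.2.15\n' > "$mycnf"

# Rewrite the bind-address line so MySQL listens on all interfaces.
sed -i 's/^bind-address.*/bind-address = */' "$mycnf"

edited=$(cat "$mycnf")
printf '%s\n' "$edited"
```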

Okay, okay, why go to all this trouble? I just prefer it. So sue me.

Forwarded port:
    Port: 33060
    Host: 127.0.0.1
    User name: homestead
    Password: secret

Direct to VM:
    Port: 3306
    Host: 192.168.10.10
    User name: homestead
    Password: secret

Managing your environment

Each time you change the Homestead.yaml file, run the command vagrant provision from the Homestead directory, to push the changes through to the virtual machine. And once you’ve finished your development session, run vagrant suspend, to pause the virtual machine. (vagrant up starts it again.) If you want to tear the thing apart and start again, run the command vagrant destroy followed by vagrant up.

How-to: Use custom DocumentRoot when hosting web sites on a Synology NAS

Synology DS214playI’ve recently taken the plunge and invested in a Synology NAS – the powerful DS214Play. Some of my colleagues have been raving about Synology’s NASes for a while and I thought it was about time I saw what all the fuss was about. This how-to article is not the place for a detailed review so suffice it to say I have been thoroughly impressed – blown away even – by the DS214Play.

The NAS is taking over duties from my aging IBM xSeries tower server. The server is a noisy, power-hungry beast and I think Mrs Geek will be quite happy to see the back of it. Life with a technofreak, eh. One of the duties to be replaced is hosting a few lightly-loaded web apps and development sites.

The NAS has fairly straightforward web hosting capabilities out of the box. Apache is available and it’s a cinch to set up a new site and provision a MySQL database. Problems arise however when you try to do anything out of the ordinary. Synology improves the NAS’s capabilities with every iteration of DSM (DiskStation Manager, the web interface), but at the time of writing, version 5.0-4482 of DSM doesn’t allow much fine tuning of your web site’s configuration.

A particular issue for anyone who works with web development frameworks (Laravel, CodeIgniter, CakePHP and the like) is that it’s really bad practice to place the framework’s code within the web root. I usually adopt the practice of putting the code in sibling directories. So, in the case of CodeIgniter for example, within a higher-level directory, you’ll have the system, application and public_html directories all in one place. Apache is looking to public_html to load the web site and then typically the index.php file will use PHP code inclusion to bootstrap the framework from directories not visible to the web server. Much better for security.

DSM doesn’t currently provide any way of customising the web root. All web sites are placed in a sub-folder under (e.g.) /volume1/web/. That’s it. When typing in the sub-folder field, forward and back slashes are not permitted.

Synology virtual host 01

This is my intended folder structure for an example CodeIgniter application:

volume1
|
\----web
     |
     \----test.domain.com
          |
          \-----application
          |
          \-----public_html
          |
          \-----system

Here’s how to do it.

  1. First, use the Control Panel to create your new virtual host. I give all my virtual hosts the same sub-folder name as the domain name. Here, let’s go for test.domain.com:
    Synology virtual host 02
  2. Very shortly, this new sub-folder should appear within your web share. You can place your framework in this folder. If you’re not yet ready to put the framework there, at least create the folder structure for the public folder that will equate to your Apache DocumentRoot. In my example, this would involve creating public_html within the test.domain.com directory.
    Synology virtual host 03
  3. Next, log in to your NAS as root, using SSH. We need to edit the httpd-vhost.conf-user file:
    cd /etc/httpd/sites-enabled-user
    vi httpd-vhost.conf-user
  4. The VirtualHost directive for your new web site will look something like this:

    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot "/var/services/web/test.domain.com"
        ErrorDocument 403 "/webdefault/sample.php?status=403"
        ErrorDocument 404 "/webdefault/sample.php?status=404"
        ErrorDocument 500 "/webdefault/sample.php?status=500"
    </VirtualHost>

    Change the DocumentRoot line as required:

    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot "/var/services/web/test.domain.com/public_html"
        ErrorDocument 403 "/webdefault/sample.php?status=403"
        ErrorDocument 404 "/webdefault/sample.php?status=404"
        ErrorDocument 500 "/webdefault/sample.php?status=500"
    </VirtualHost>

    Then save the file.

  5. UPDATE: Thanks to commenter oggan below for this suggestion – instead of the following direction, you can just issue the command httpd -k restart at the command line.
    There’s not a lot of information out there about causing Apache to reload its configuration files. I found that calling the RC file (/usr/syno/etc.defaults/rc.d/S97apache-sys.sh reload) didn’t actually result in the new config being used. Whatever the reason for this, you can force the new configuration to load by making some other change in the control panel for web services. For example, enable HTTPS and click Apply (you can disable again afterwards).
    Synology virtual host 04
  6. You will now find that Apache is using the correct web root, so you can continue to develop your web application as required.

NB: There’s a big caveat with this. Every time you make changes using the Virtual Host List in the Synology web interface, it will overwrite any changes you make to the httpd-vhost.conf-user file. Until Synology makes this part of the interface more powerful, you will need to remember to make these changes behind the scenes every time you add or remove a virtual host, for all hosts affected.
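Given that caveat, it may be worth scripting the DocumentRoot fix so it can be re-applied quickly after DSM clobbers the file. The sed sketch below runs against a throwaway copy for illustration; on the NAS the real file is /etc/httpd/sites-enabled-user/httpd-vhost.conf-user.

```shell
# Dummy copy of the relevant vhost lines, for demonstration only.
vhost=$(mktemp)
cat > "$vhost" <<'EOF'
ServerName test.domain.com
DocumentRoot "/var/services/web/test.domain.com"
EOF

# Point this site's DocumentRoot at its public_html sub-directory.
sed -i 's|DocumentRoot "/var/services/web/test.domain.com"|DocumentRoot "/var/services/web/test.domain.com/public_html"|' "$vhost"

vhost_after=$(cat "$vhost")
printf '%s\n' "$vhost_after"
```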

iPhone 6. The Rumour Mill – Likely Models, Versions and new Features

So we wait with bated breath to see what the next iteration of the Apple iPhone will be. With the competition putting out ‘iPhone killers’ almost daily and nibbling into Apple’s market share, it seems time for something dramatic from the innovative tech company.

Whilst still very much at the rumour mill stage, here is what the available evidence and info is strongly suggesting.

The next version of the iPhone is widely expected to be called the iPhone 6, and it is scheduled for release in September of this year.

If the huge orders Apple has been placing in Japan with Sharp and in South Korea with LG are anything to go by, the anticipated increase in screen size will become a reality. Initial reports and information leaked from those factories suggest we will be looking at two versions. The current model’s 4″ screen will be scaled up into two new versions sporting either a 4.7″ or a 5.5″ screen. It goes without saying that these will be high-resolution liquid crystal displays.

We can expect the iPhone 6 to be a far more powerful beast with an uprated processor; according to some sources, a major Taiwanese semiconductor manufacturer has started a production run of these next-generation A-series ‘Apple A8’ chips. First reports suggest a very fast 2.6GHz part.

As well as the increase in screen size, we expect the iPhone 6 to be far thinner than the current model. Here at G&D we have read more than one report suggesting it could be as little as 5.5mm thick, which would be quite a significant change from the current design.

So what about that screen? It seems Apple may well be moving to an Ultra-Retina display with a pixel density pushing 389ppi. Design features will also include a durable Sapphire screen at long last. All this coupled with the larger screen sizes adds up to a mouth-watering combination that some would say is long overdue.

Other rumoured features that have been floating around cyberspace:

Significant improvements to the camera, with major changes to the aperture size and possibly a move to as much as an 8-megapixel sensor. Some sources are even suggesting that Apple has at last decided the camera is a vital aspect of a smartphone, and may go all out with a 10-megapixel version with an f/1.8 aperture, complete with interchangeable lenses.

There is also a lot of hype about Apple going with a bezel-less display or at least playing with the iconic design feature to make it less prominent.

Personally, I’m a little worried that Apple may be finding it necessary to do battle with competitors on screen size. Once a smartphone doesn’t fit into my trouser pocket, it’s no longer a phone in my eyes. However, if they can squeeze every available mm of front-facing space into being screen, that would be the way to go!

The Apple App store is also set for some changes and improvements but details are sketchy so far.


News: Motorola announces new smart watch Moto 360 – and it’s a beauty

One of the most interesting areas of development in the consumer technology industry is wearable tech. The segment is in its infancy and no one quite knows whether it will turn out damp squibs or cash cows (if you’ll pardon the mixed metaphors). Top manufacturers are jostling for space with arguably premature “me too” gadgets that amount to little more than technology previews. There are even technology expos dedicated to this new sector.

Galaxy Gear – not great
When Samsung brought out its Galaxy Gear, I thought “we might have something here”. But the price was all wrong. I know the company can’t expect to ship many units at this stage in the game, but the opening price of £300 for a bleeding-edge, partially-formed lifestyle accessory kept all but the most dedicated technophiles firmly at bay. The Gear has failed to capture the public’s imagination and I think I know why. Putting aside the unconvincing claims that the Gear “connects seamlessly with your Samsung smartphone to make life easier on the go“, there’s one very big problem with this, and almost all other smart watches: it’s ugly.

Watches long since ceased to be simply pedestrian tools that tell you the time. They are fashion accessories. They express our individuality. Who wants to walk around toting one of these half-baked forearm carbuncles?

So I noted with interest Motorola’s announcement yesterday that the company is getting ready to launch the new round-faced, Android Wear-powered Moto 360.

Motorola D811 – stylish DECT answerphone
MOTOACTV – ugly duckling
Somewhat like Apple, Motorola has a reputation for adding its own design twist to everyday technology. I have a DECT cordless answering machine from Motorola, chosen largely on the strength of its looks, in a market where most of these devices have very similar capabilities.

Motorola’s previous attempt at a smart watch, the MOTOACTV, is frankly no supermodel. But if the MOTOACTV is the acne-ridden, orthodontic braces-sporting ugly duckling, the Moto 360 is the fully grown, airbrushed to perfection swan.

The Moto 360 in all its glory

Just look at it. Now we’re onto something. Now we’ve got a watch where I wouldn’t have to spend all day persuading myself it’s pretty. Quite the contrary. I’m not that bowled over by the leather strap version, but in metal bracelet guise, I think we’re looking at a genuine designer item.

Pricing is yet to be announced, and no doubt it will be a long time before it’s stocked in UK stores. But this Geek hazards a guess that it will be worth the wait. Until it’s available, the only smart watch that comes close in terms of style in my humble opinion is the Pebble Steel, which is a little hard to come by, this side of the Atlantic.