I’m a fairly heavy user of apps on my mobile devices. I periodically review them, but it’s quite rare I discover an app that I can happily uninstall and live without. It’s not uncommon for me to have over a hundred apps installed, only one of which is a game. And I’m sure I’m not alone in this. It tells us, doesn’t it, what an invaluable tool the smart phone has become for life, work and play.
So initially when my phone (a Samsung Galaxy S5) told me I was running out of space, I wasn’t all that surprised. Time to review my app usage. Time to move some apps from device storage to SD. Time to clear out some on-device photos and videos. Which I duly did.
And then a few days later – running out of space again. Curious.
I chased it down to the IPsec service. Each time I freed up a bit more space, according to the app manager, IPsec Service expanded to fill the void. At the time of writing, it’s now consuming a wholly excessive 1.64GB – but as I’ve read around about this problem I see reports from Galaxy S6 users who have lost over 4GB to this service’s insatiable appetite. On a clean install by the way, it’s taking up 388KB.
As best we can tell, it’s due to some kind of memory leak in the IPsec service. This afflicts Android 5.0 – and as luck would have it, a few weeks ago I finally relented and upgraded my S5 to Lollipop. And it will be some time before the fix, in 5.1.1, is rolled out by my carrier – that’s if they ever do get round to it. The S5 is so last year, darling.
So what’s the workaround? Well, I wish I could give you good news. I’ve come up with nothing. This has taken me to the point of saying the heck with the warranty, I’m going to root it and flash a different ROM. What with this and the recent Stagefright scare, it almost makes me want to move over to the Dark Side and buy an iPhone. Almost!
If you’ve found a more satisfactory solution, do let us know in the comments. Meanwhile, I’ll be getting to grips with CF-Auto-Root and finally releasing my handset from the whims of manufacturers and carriers. Wish me luck!
Like many tech enthusiasts, Dummy and I have been keeping an eye on the smart watch market for a while. As you will probably know, there are a few large companies (with the Chinese snapping at their heels) searching for the holy grail of wearables: a beautiful wristpiece that is elegant, convenient, clever and durable. To achieve widespread adoption, it also needs to be affordable. Ah yes, there’s the rub.
I recently stumbled across a smart watch, sometimes called “Aplus”, sometimes “GV18”. It’s fresh out of China. And it bears more than a passing resemblance to the Apple Watch. And it’s a tenth the price. We bought it for £32.98, but we’ve since seen it for under thirty quid. Worth a look then.
First impressions:
The watch doesn’t look quite as nice as the computer-generated photos on websites, but it’s still reasonably attractive, as smart watches go.
It’s big (13mm deep) and stands quite proud of the wrist.
The case has a captive screw on the back, which stands out by about 1.5mm. Not a huge problem, but it seems like a strange design choice because the screw is for looks only. The case pops off easily (too easily) and the hole the screw sits in is considerably larger than the diameter of the screw. So it turns freely.
The manual is poorly translated.
The watch comes with a screen protector pre-installed, which suggests the glass underneath will not be scratch-resistant.
The rubber strap is very comfortable.
Horribly irritating (loud) jingle when you first switch it on.
Timepiece
For me, the problem with most smart watches is the watch part. Sounds obvious, doesn’t it? Really, what is the point of a watch that isn’t a very good watch? If I turn my wrist to check the time, but before I can see the time I have to press a button, that’s a retrograde step. That’s worse than analogue. And so it is with this watch. It’s an LCD display, not e-ink, and to keep the display lit permanently would be a huge battery drain. So you have to press the side button to check the time.
Once you’ve done that, it’s not too bad. There’s a choice of three watch faces. One of these faces has a full dial of Roman numerals and is designed sympathetically with the rectangular case. I think it works. Of the other two, one is clumsy and the other is weird.
Interface
Oh dear, it’s awful. To be honest, I think they probably all are, from all manufacturers. Anything that can’t be done with a press or a flick is a pain in the neck. Unless your fingers are like matchsticks, it’s hard to type letters with a high degree of accuracy on the software keyboard. It’s a little better with numbers, but still vaguely reminiscent of those calculator watches from the eighties. Is this really all the progress we’ve made in 30 years?
Apps
As far as I can tell, this is running a bespoke version of Android. There’s no app store, no access to Google Play. There are some bundled apps, but most of them are useless and half of them only work if you have inserted a SIM card. That alone is odd. The watch is designed to be paired with a smart phone. Why would you give it its own SIM card?
I wish I could tell you more about the apps, but most of them made no sense. The only real exceptions were the calculator and the camera. But both of those were such a fiddle to use, you’d be much more likely to reach for your phone. It has a pedometer, but it just doesn’t work.
Sync software
For the watch to talk to the phone, you have to install an app. The app is not the best. There are few settings. You can choose to ignore notifications from certain apps, but it’s a slow and laborious process choosing which apps you do and don’t want to hear from.
(Sorry about the poor screen grab by the way.)
If Bluetooth is switched off when you launch the notification app, you are greeted with the following informative message. Informative that is, if you can read Chinese.
I deduced this meant you need Bluetooth to be switched on… With Bluetooth switched on, the app needs to be running in order for the watch to receive notifications. The app seems to die all on its own, without warning, and the only way you’ll know that is if notifications stop arriving on the watch.
Specifications
Headline specs when compared to the similar size 42mm Apple Watch
Spec | Aplus GV18 | Apple Watch 42mm
Screen | 1.54″ capacitive | 1.54″ capacitive
Battery | 450mAh, replaceable (though the battery in our unit was labelled 550mAh) | 246mAh, non-replaceable
Claimed battery life (talk time) | 72 hours | 3 hours
Thickness | 12.3mm | 12.6mm
Bluetooth | 3.0 | 4.0 Low Energy
Processor | 533MHz MTK6260A | Apple S1
Storage | 128MB | 8GB
MicroSD/TF slot | Yes, 32GB max | No
Pixels | 240×240 | 390×312
Sensors | Accelerometer | Accelerometer, heart rate
GPS | No | Yes
Phone | GSM/GPRS 850/900/1800/1900 (SIM slot) | Yes
Charging | Cable | Inductive
Weight | 50g | 51g
Camera | Yes, 1.3MP | No
NFC | Yes, built into strap | Yes
USB port | Micro USB | No
Flaws
There are many.
Convenience. Above all else, a watch should be two things: convenient and attractive. This is not convenient. If I glance at my wrist to see the time, I’m met with a blank screen. No “shake to wake”. You have to fumble for the button, which, if like me you wear your watch on your left wrist, is quite awkward to reach.
Volume control. There is no obvious volume control for notifications.
Bluetooth music. You can stream music to your watch via Bluetooth. And listen to it on your watch’s tiny speaker. Which is probably inferior to the speaker in your phone. Which you’re streaming from (and which has to be within 10 metres, due to the limitations of Bluetooth). There’s no headphone socket. So what’s the point?
Time synchronisation. When the watch first connects to the phone, it asks if you want to sync the time. Since I live in the UK, my phone is set to GMT with daylight saving time. On syncing with the phone, even though the watch is set to the same time zone it changes itself to Amsterdam and puts the clock out by an hour.
Notifications. The pop up notifications are almost useless. They tell you for example that you’ve received an email, but there’s no way on the watch of seeing that email or even any context from the email. So you have to check your phone. So you may as well just check your phone, right?
Notifications again. There’s an option to switch off the notification tone. It doesn’t work. So, like it or not, if you have pop up notifications, you’re also going to have an annoying beep. And there’s no way of changing that beep. Which brings me to my next point.
Customisation. You can’t customise this watch – which is a huge loss. There are three watch faces (and two of them don’t suck too badly), but that’s all. You cannot add more. There are three themes for the menu/app system. Two of them are horrendous. The third is tolerable. You cannot add more. Oh, and apps? That deserves a bullet point of its own.
Apps. As I mentioned before, other than the few bundled with the watch, there aren’t any. There’s no equivalent of the iTunes or Google Play app stores. So you’re stuck with these apps.
Interface. You need fairly slender fingers to operate it – especially the software keyboard. Very hard to hit the right letter. And since there’s no voice control (see next bullet point), you’re stuck with touch/swipes.
Voice control. There isn’t any. And this is, we think, going to be crucial in this technology market. Watch faces will always be smaller than phone screens. It’s essential that you have a usable and convenient way of controlling them. That means you need either an external interface (keyboard? your phone?), which sort of defeats the point, or voice activation. Or maybe, fast forward 20 years, a neural interface. This watch has neither, by the way.
Style. In our opinion, the Moto 360 and the LG Watch Urbane are possibly the only smart watches right now that aren’t ugly. People will accept a certain level of aesthetic compromise in exchange for features (e.g. the massive “brick” phones of yesteryear), but not much. And with the 360 and Urbane on the market, all other smart watch manufacturers need to think long and hard about style.
Reliability. Bluetooth keeps disconnecting and reconnecting – even when the phone and watch remain next to each other. Is this the phone’s fault? The watch’s? Who knows. But every time they reconnect, the watch prompts you whether or not you want to sync time (you don’t, see above!) and then spits out all the notifications currently unviewed on the phone. Which are then a bit of a pain to acknowledge/delete.
Visibility. It’s really difficult to read the screen when outdoors. And when in strong sunlight, there’s no chance. There’s no brightness control, so there’s nothing you can do about this, other than shade the screen with your hand. And squint.
Build quality. The back is not secured well (because the case screw does nothing, see above) and doesn’t fit snugly against the body of the watch. It wouldn’t drop off while you’re wearing the watch, but it might at other times.
Strengths
Style. Although it’s no Moto 360, it’s not as bad as some other watches available now. The brushed steel is nice.
Comfort. The rubber strap is surprisingly comfortable. It’s a little on the heavy/chunky side, but you get used to it.
Battery life. It lasted five days before needing a charge. How much this was to do with the fact it was essentially useless, I’m not sure (!) but it still knocks the spots off the Apple Watch in this particular department.
Conclusion
We have to give this watch some credit. For the price, it’s actually pretty incredible. It’s far less ugly than some of the competition and it does have a lot of functionality, even if it’s not especially well executed. We couldn’t help but think that in a world without smart phones, it would even be considered quite good. You could in theory load it up with a SIM card and use it as a watch, phone, calculator, contacts organiser and so on, without needing any other device. But this is a world with smart phones and when you compare it to any smart phone currently on the market, even the worst ones, this watch doesn’t compete at all well. And neither does it complement a phone, bringing no particular tricks to the party.
It was a bit of a conversation starter, while I wore it. A novelty. And if you don’t mind paying a little for a novelty item that you’ll quickly find tiresome, then by all means go ahead. But we couldn’t recommend it. We can’t even recommend the Apple Watch, and if Apple can’t get it right, who can?
[easyreview title=”Geek rating” icon=”geek” cat1title=”Ease of use” cat1detail=”Fiddly, fussy, idiosyncratic.” cat1rating=”1″ cat2title=”Features” cat2detail=”Lacking many essentials for a usable smart watch.” cat2rating=”1″ cat3title=”Value for money” cat3detail=”Very cheap, given the (few) things it can do, but still not remotely worth buying.” cat3rating=”1.5″ cat4title=”Build Quality” cat4detail=”Mixed. Some good bits, some bad bits.” cat4rating=”2″ summary=”Don’t buy it, we beg you.”]
I own a Canon EOS 60D, which I bought second hand a couple of years ago. It’s a cracking camera and it was an absolute steal on the second hand market. But it’s not very portable. Not when you take into account the other things I stuff into my camera bag: my three main lenses, the filters, the remote shutter release, the lens hoods and so on.
Of course these days, many people carry a half-decent camera with them at all times, in their phones. These cameras aren’t very versatile, but they’re convenient because they’re almost always at hand. And because of this, there’s a healthy phone camera mod market. One of the leaders in this field is the Olloclip.
Olloclips are great. The trouble is, each Olloclip is designed for a particular phone (or small family of phones). So it’s not really transferable. And with prices in the order of £60, you can buy a pretty competent compact point-and-shoot for not much more than that. It’s clever, good quality, but not exactly a bargain. Not like today’s review kit at least.
This 3-in-1 camera kit, like many other Chinese gadgets, can be found for sale on a few shopping sites, under various different “brand names”. Our example was sold as a “Yarrashop”, but we suspect that’s just the current trade name of this particular seller. The kit arrived in an anonymous box, with no manufacturer claiming responsibility. And we think that’s a shame because, as we reckon you’ll agree, it’s rather extraordinary.
In the box, there are three lenses, a bag and a clip. The bag doubles as a lens cleaning cloth. The clip, with rubber pads, enables you to attach the lenses to virtually any mobile phone or tablet.
One of the lenses is a fisheye lens. The other two can be used in combination, to form a wide angle lens, or you can use the smaller component on its own as a macro lens. The lenses and the clip are all sturdy metal, with a solid feel. They can be purchased in different colours, but we went for silver, which we think suits this kind of equipment.
The clip attaches securely on the phone or tablet. You do have to position it carefully – this is hardest with the fisheye lens; with the other two, you can see the phone’s camera lens underneath – but once it’s situated, taking photographs is no harder than usual.
With the fisheye lens, the photograph appears as though within a circle cut out from black card, so the photo would need cropping afterwards. The wide angle lens – I’m not sure there’s that much use for it; there’s some barrel distortion at the edges and in any event, most smart phones can stitch shots together into a panorama, which would be far superior. The macro lens, well that’s a cracker. You have to be very close to the subject, so you’d be unlikely to be able to use this on nervous insects. And you probably don’t have a tripod for your phone, so you need a reasonably steady hand. But in spite of all that, the effect of the lens is impressive.
Here are some example shots, taken with the lens attached to a Samsung Galaxy S5. Click through for the full resolution images.
As long as you don’t compare this with DSLR quality, this is not bad at all, right? But then we get to the punchline. These lenses, clip included, will set you back less than £7. That’s unbelievable. Seven quid. No matter who I’ve shown this to, when I’ve told them the price they have been incredulous. I still can’t believe it, to be honest. But the truth is shown in my Amazon order history and on my bank statement.
Under close inspection, there is some loss of clarity and marginally less light hitting the sensor. But if you’re starting out with a very good phone camera, this slight degradation is, we think, more than acceptable, especially given the increased versatility. A few more shots:
You’d think there has to be a catch, wouldn’t you? It’s hard to find one, actually. Separating the wide angle lens from the macro lens is a bit fiddly – and counter-intuitive too, because it’s reverse-threaded. But not too difficult. And it would be nice to have a case for the lenses – the bag doesn’t do much to protect them. But given the price, we’re really splitting hairs. I dug out an old cufflink case and that was perfect for the job.
I’d say to anyone who takes the slightest interest in phone-based photography – get this kit. You won’t regret it. It’s an absolute bargain, well made and practical. At this price, what do you have to lose?
[easyreview title=”Geek rating” icon=”geek” cat1title=”Ease of use” cat1detail=”Very slightly fiddly. But otherwise extremely simple.” cat1rating=”4.5″ cat2title=”Features” cat2detail=”The kit lacks only a case.” cat2rating=”4.5″ cat3title=”Value for money” cat3detail=”Phenomenal value for money at this price.” cat3rating=”5″ cat4title=”Build Quality” cat4detail=”Well made. I wouldn’t be surprised if the odd unit has burrs on the thread or seams, but I saw no evidence of that here. Not the best optics, unsurprisingly.” cat4rating=”3.5″ summary=”All in all, an outstanding kit. Great as a gift, stocking filler, whatever. Or treat yourself, without really any feeling of guilt. You’d spend more on a couple of pints of beer and you know what happens to that. ;-)”]
It’s been quite a while since we’ve posted anything about Laravel. We’re strictly hobbyist developers here and in web development it’s almost impossible to keep up with the rate of change unless you’re a full time developer (and even then, it’s not easy). This pace of change of course means trouble not only for small-time developers like us, but also for enterprise users who favour stability over bleeding-edge features.
So the recent announcement that Laravel 5.1 is the first version to offer long term support (LTS) is timely. LTS in this case means two years of bug fixes and three years of security updates (as opposed to six months and one year respectively for other releases). And for us, this means that although our version 4 tutorials quickly became obsolete, our version 5 tutorials should have a chance of remaining relevant for the next three years. So we hope this new series will be useful for you, our readers.
Without further ado, let’s dive in.
Prerequisites
These days there’s a phenomenal number of ways to get up and running with a server – Vagrant, Puppet, Chef, Ansible and so on. For the purposes of this tutorial I’m going to assume the most basic requirements:
Apache web server (other web servers will work, but we won’t explicitly deal with them)
Shell access to the server (preferably SSH)
Root access to install Composer globally (not essential)
Git must be installed in your environment.
PHP >= 5.5.9
OpenSSL PHP Extension (probably compiled in to your PHP installation – check with phpinfo(); or from the shell, as shown just after this list)
Mbstring PHP Extension
Tokenizer PHP Extension
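Incidentally, a quick way to check your PHP version and confirm those last three extensions are present, assuming the PHP command-line binary is installed:
php -v
php -m | grep -i -E 'openssl|mbstring|tokenizer'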
Install Composer
Composer is an integral part of Laravel these days. It’s used for managing dependencies – external libraries and the like, used by projects. It is also used to install Laravel. While logged in as root, to make Composer available globally, do:
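For example, as root (the installer one-liner is from the Composer documentation; the symlink is the variation discussed below):
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin
ln -s /usr/local/bin/composer.phar /usr/local/bin/composer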
The official Composer documentation suggests using mv composer.phar composer, but if you use a symbolic link instead, upgrading Composer is as simple as running curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin again.
Install Laravel
There are different ways of approaching this, but the approach I prefer (for its simplicity) is as follows. To install Laravel in the directory that will house your web project (e.g. if that’s under /var/www), enter:
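Something along these lines should do it (assuming Composer is available globally, as above, and that the site will live at /var/www/new.website.name – adjust the path and name to suit; the version constraint keeps you on the LTS release):
cd /var/www
composer create-project laravel/laravel new.website.name "5.1.*"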
There will be a lot of activity in the console as all Laravel’s various components are installed. The new website directory contains a folder “public” and it’s to this you need to direct your web server. So for example, with Apache, create a new configuration file /etc/apache2/sites-available/new.website.name.conf:
<VirtualHost *:80>
ServerName new.website.name
DocumentRoot "/var/www/new.website.name/public"
<Directory "/var/www/new.website.name/public">
allow from all
Options +Indexes
</Directory>
</VirtualHost>
Again, for Apache, enable the new website (e.g.):
a2ensite new.website.name
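On Debian/Ubuntu-style systems you’ll then usually need to reload Apache for the new site to be picked up:
service apache2 reload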
If you’re using a control panel (cPanel, Plesk, Virtualmin, etc.) your steps will vary. When you then browse to your new site, you should see something like this:
Configuration
There’s lots you can configure, but here are some basics.
Make sure the storage and the bootstrap/cache directories are writeable by the web server. E.g.:
chown -R www-data:www-data /var/www/new.website.name/storage
chown -R www-data:www-data /var/www/new.website.name/bootstrap/cache
find /var/www/new.website.name/storage -type f -exec chmod ug+rw {} \;
find /var/www/new.website.name/storage -type d -exec chmod ug+rwx {} \;
find /var/www/new.website.name/bootstrap/cache -type f -exec chmod ug+rw {} \;
find /var/www/new.website.name/bootstrap/cache -type d -exec chmod ug+rwx {} \;
In config/app.php, set your time zone (e.g.):
'timezone' => 'Europe/London',
Choosing something decent from that vast array of choices is no mean feat. We started out with a basic task: set four of the best sub-£30 speakers against each other, assess them as aesthetically and as scientifically as we can (not that scientific – we’re a Geek and a Dummy, not high-end audiophiles!) and come up with a winner. Not easy, as you’ll see!
The four contenders
For this review, we’ve picked four of the highest-rated speakers on the market for around £30 (at the time of review – these prices can be quite volatile). Starting clockwise from the top left of the picture, with prices at the time of our purchase, they are:
Let’s start with the speaker that was (just) the cheapest of the four: “The Elf”. The Elf is a pretty anonymous black box. All these speakers have a matt rubberized finish; in the case of the Elf and its brother (more on that shortly), the finish is a little on the cheap side.
It has a full complement of six buttons: track skip (forwards/backwards), volume up/down, play/pause and call answer. This suits me far better than the minimalist single button approach. I don’t want to memorize how many seconds I need to hold a button or how many presses correspond to each particular action.
Pairing with my Android phone was simple and easy. The speaker confirms connection in an excessively loud female American voice saying “Connected”. Not very subtle when you’re trying to set up some quiet tunes in the morning. And there’s something about it that’s a little… cheesy.
Though we didn’t test this exhaustively, the speaker seemed to live up to the claimed charge time of 3-4 hours and playback time of 10-12. It was in that ballpark. And it was about the loudest speaker in this group test – we could turn this one up the most, before distortion crept in. Bluetooth range was pretty reasonable – about 15 metres before the connection started to drop.
All the speakers can be used as hands-free speaker phones, and this one was the best of the bunch. Good, clear call quality, and it handled the problems of two-way audio (avoiding feedback) very competently. I’m not sure that’s why people buy speakers like this, but in a pinch, you can use this as a conference phone with little difficulty.
The Elf was the heaviest and the cheapest (at the time of purchase) speaker in this group and of the four, it’s the one I kept personally. It didn’t have the widest frequency response, but it is more than adequate – good, in fact for the use I now put it to daily: music in the shower.
Incidentally, if you’re looking for this speaker, just use the “WS-701” search term – it’s currently available under a different brand name, “Coppertech”.
It’s fair to say that Chinese technology companies aren’t renowned for respecting the intellectual property rights of other companies. I mean, “Bolse”. Come on guys. Next you’ll be calling yourselves “Microsloft”.
After the slightly comical name, the next thing you notice about this speaker is how similar it is to the Elf. Virtually identical in appearance, in fact. The model names are similar too – “WS-701” vs. “SZ-801”. In fact, the only major difference between this speaker and the slightly cheaper Elf, is that the Bolse has NFC, which we’ll come to in a second.
Here at Geek & Dummy, we don’t pretend to be technology insiders. We really are just a regular Geek and a regular Dummy. So we’re just going to state what everyone else can see is blatantly obvious: the two speakers came out of the same factory. The Bolse is a later or upgraded version of the Elf. Who knows if “Bolse” and “Elf” even exist as trading entities.
Given their similarity (the grille pattern is very slightly different), you’ll not be surprised to read that they fared almost identically in our tests. I found the NFC to be little more than a gimmick. Touch your NFC-equipped Android phone (sorry, no iLove here, apparently!) against the speaker and Bluetooth is automatically switched on and the phone and speaker automatically paired. Given that pairing and switching on Bluetooth aren’t exactly onerous tasks, I’m not sure I’d say this feature is worth the extra £5 you pay for it.
Again, in comparison to the Elf, playback time is down to 8-10 hours (from 10-12). The box claims it is a more powerful speaker (12W RMS vs. 10W) but in our tests, it distorted earlier than the Elf, indicating slightly poorer speaker construction. And hands-free call quality wasn’t bad, but slightly worse than the Elf, sounding “fuzzy” on the other end of the call.
The Bolse comes with a horrible drawstring bag, that you probably wouldn’t want to use for storage. The included audio cable is a little better than that included with the Elf.
In short, when placing the Elf WS-701 alongside the Bolse SZ-801, we’d only choose the Bolse if it were the same price as the Elf.
This is the lightest of the four speakers on test, weighing in at just 270g. It has just three buttons (forward, back and pause/play/answer). In our opinion, it’s the ugliest speaker on offer here today and it has the poorest battery of the set, at just 800mAh.
The Tecevo does have a few unique tricks up its sleeve though. First, it does come in other colours than black. Second, it has phenomenal Bluetooth range: 90 feet (27 metres) – by far the best range of any of these speakers. This far exceeds the typical range of Bluetooth devices.
And finally, which is perhaps most interesting, the Tecevo has an audio output socket (in addition to the input socket). This doesn’t mean you can daisy-chain speakers – the sound cuts out when you plug a lead into the output socket – but it does mean you can effectively use this speaker to Bluetooth-enable any other music system. Connect it to your ancient-but-good hifi, and stream tunes from your phone. Nice. Make sure it’s plugged into a USB charger though – the battery will give up the ghost before any of the competition.
Not that it matters much, but you wouldn’t want to use this speaker as a hands-free device. Calls sound like you’re in a tunnel, with lots of echo.
This just leaves the Anker. Anker is making a good fist of emerging as a credible purveyor of gadgets, in a very crowded marketplace. We’ve seen a few items from Anker now, and they do stand out in the crowd: manuals that read like the writer does actually speak English, well-packaged, well-finished and with good warranties. The warranty on this speaker for example, is 18 months, which is not bad at all.
The Anker has a different form factor from the others. It’s square, rather than rectangular, and houses a single large speaker, rather than the twin speakers in the others. It’s reassuringly chunky and the soft touch rubber finish has the highest quality feel of the speakers in this group.
The Anker has the longest claimed playback time, at 15-20 hours. We can well believe it, given it has the largest capacity battery (2100mAh) and takes the longest to charge (5 hours). The larger battery contributes to the general feeling of solidity. Without doubt it stands out for the quality of its construction.
It’s the most up to date speaker too, following version 4.0 of the Bluetooth specification. It suffers with range though, dropping out at just 10 metres (33 feet). It’s not the loudest either, and its bass response, though adequate, isn’t quite as good as the others. It’s also not great as a hands-free speaker.
Conclusion
So, which would we choose? If quality and aesthetics are most important to you, the Anker is the superior choice. But for us, the Elf is the clear winner, with its all-round abilities. And for a speaker this size, the sound quality is more than adequate. For sure, it’s no Bose, but then it’s a fraction of the price. And you wouldn’t want to take your expensive Bose into the bathroom with you – whereas with this, no problem. And helpfully, at the time we purchased, it was the cheapest of them all. Job done: buy the Elf (a.k.a. Coppertech).
If you’re interested in all the data we captured and used for this review, here’s a spreadsheet you might enjoy. For the Geeks among us. 🙂
I’ve been using HomePlug AV adapters at home for years. These excellent devices turn your ring mains into a LAN, incredibly routing network data over your electrical cabling. As long as all your sockets are on the same phase, you can put a network socket wherever you have an electrical socket.
This is excellent news if you need to get Internet to some remote corner of your house, where your wifi doesn’t reach, but don’t want to trash the joint, installing network cabling. Plug one HomePlug device into a socket near your router and another wherever else you need it. That’s pretty much job done. Some of these devices can transmit data at up to 500Mbps, which is pretty impressive.
After six years of constant use, my old ZyXEL PLA-401s started becoming less and less reliable. I had four of these – one by the downstairs router, one upstairs plumbed into a separate wireless access point, another in the garage (likewise), and finally one in the loft for my servers (which I’ve since retired, in favour of an excellent all-singing, all-dancing Synology DS214Play). Over time, the ZyXELs along with the two WAPs have developed some idiosyncrasies, needing occasional restarts. They were running a little hot and that’s never a good thing. Well, they’ve provided good service, so nothing lost.
Ideally, I wanted to retire all the existing HomePlug adapters, plus the two wireless access points – and to do that in a cost-effective manner. My search brought me to TP-Link’s triple pack, the memorably named “TL-WPA4220T Kit”. £80 gets you three Powerline adapters, two of which are also wireless access points, with twin ethernet ports. Turns out this exactly matched my requirements, since I no longer needed a device in my loft. One device to plug into my wireless broadband router, one to provide wireless upstairs (and connect to two adjoining cabled devices) and one for the garage to provide wireless access in our garden.
TP-Link is one of the more reputable electronics manufacturers to send us gadgets out of China. Still, I’ve had a few run-ins with Chinese electronics, especially relatively cheap devices like these Powerline adapters, so I wasn’t expecting things to be entirely straightforward.
My first impression was favourable. My old ZyXEL adapters look clunky and old-fashioned next to these sleek, shiny gizmos. Clearly over the last few years, like all technology, the adapters have shrunk; and the pressure of certain design-led technology manufacturers has persuaded others to give aesthetics at least a token consideration prior to launch. There are two larger white adapters in the box (separately available as TL-WPA4220s) and a smaller grey-faced TL-PA4010. The smaller adapter is not wifi-enabled – you connect this one to your wireless router and it “introduces” your Internet connection to all other adapters (via the mains).
I read the promises of the simple “plug & play” (oh how nineties!) setup with a degree of skepticism. The poorly translated manuals did not instill confidence (though I’ve seen far worse). That said, these are consumer devices and I’m a Geek, so you’d think it wouldn’t be insurmountable. 😀
First, the problems. I could not get WPS to work. The idea is that you press the WPS button on your router, then the “wifi clone” button on the Powerline WAPs, and the network settings are automatically copied. I tried this every which way. You’ll appreciate that I’m no Dummy when it comes to these things, but it just wasn’t happening. Possibly my DrayTek router speaks a slightly different dialect of WPS. The TP-Links couldn’t understand the accent.
Another problem came from the fact that I attempted to set up the TP-Link adapters while the old ZyXELs were still installed. I half-expected that this would cause trouble. I was right. Ah well. A couple of factory resets later and with the ZyXELs unplugged we were working much better.
One more problem – though the TP-Links came with a CD, my laptop doesn’t possess a CD drive. I proxied the files via another CD-equipped device, only to find that the software included on the disk didn’t really work well under Windows 8. Doh!
Never mind. If you find yourself in this situation, do what I did: head over to TP-Link’s download site and grab everything you need from there.
Happily, once I had the correct software, I was able (easily) to log onto the wireless-enabled TP-Links and enter all the wifi settings (twice, once for each WAP). With this all done, with the three devices talking to each other and the two wireless-enabled devices offering the same authentication requirements as my router, everything is now working brilliantly.
A great bonus for me is the huge signal you get from the WAPs. I’d already upgraded my DrayTek router with larger antennae, but the TP-Link WAPs just blew it away. Twice the signal strength and much better connectivity all round my house (and up my garden). The drawback is that someone can now wardrive my network from the next county, but hey, that’s a small price to pay for speed, right?
[easyreview title=”Geek rating” icon=”geek” cat1title=”Ease of use” cat1detail=”Some problems with WPS, but not too difficult to rectify.” cat1rating=”4″ cat2title=”Features” cat2detail=”High speed, extra LAN ports, WAPs included in the kit, MAC filtering if you want it – all in all, pretty impressive feature set.” cat2rating=”4.5″ cat3title=”Value for money” cat3detail=”Basically no kit that I’ve seen beats this on value.” cat3rating=”5″ cat4title=”Build Quality” cat4detail=”Looks really well built. Solid plastics. Reasonably attractive as these things go and not too big.” cat4rating=”4.5″ summary=”For me, this kit was great value for money. I wouldn’t have bought it otherwise. I have no hesitation recommending it.”]
I don’t think this can be that uncommon a scenario: a Windows Server 2008 R2 domain, with mainly HP printers. New domain controller added (at new site), this time running Windows Server 2012 R2; HP printers there too.
This was the position I found myself in earlier this year. On paper, there’s nothing unusual about this set-up. Adding new 2012 DCs and standard HP workgroup printers shouldn’t be a problem. That’s what we all thought.
Until the domain controller started becoming non-responsive.
Cue many, many hours on TechNet and various other similar sites, chasing down what I became increasingly sure must be some latent fundamental corruption in Active Directory (horrors!), revealed only by the introduction of the newer o/s. There were many intermediate hypotheses. At one point, we thought maybe it was because we were running a single DC (and it was lonely). Or that the DC was not powerful enough for its file serving and DFS replication duties. So I provisioned a second DC. Ultimately I failed all services over to that because the first DC was needing increasingly frequent reboots.
And then the second domain controller developed the same symptom.
Apart from the intermittent loss of replication and certain other domain duties, the most obvious symptom was that the domain controller could no longer initiate DNS queries from a command prompt. Regardless of which DNS server you queried. Observe:
*** UnKnown can't find bbc.com: No response from server
Bonkers, right? Half the time, restarting AD services (which in turn restarts file replication, Kerberos KDC, intersite messaging and DNS) brought things back to life. Half the time it didn’t, and a reboot was needed. Even more bonkers, querying the DNS server on the failing domain controller worked, from any other machine. DNS server was working, but the resolver wasn’t (so it seemed).
I couldn’t figure it out. Fed up, I turned to a different gremlin – something I’d coincidentally noticed in the System event log a couple of weeks back.
Event ID 4266, with the ominous message “A request to allocate an ephemeral port number from the global UDP port space has failed due to all such ports being in use.”
What the blazes is an ephemeral port? I’m just a lowly Enterprise Architect. Don’t come at me with your networking mumbo jumbo.
Oh wait, hang on a minute. Out of UDP ports? DNS, that’s UDP, right?
With the penny slowly dropping, I turned back to the command line. netstat -anob lists all current TCP/IP connections, including the name of the executable (if any) associated with the connection. When I dumped this to a file I quickly noticed literally hundreds of lines like this:
As it happened, this bit of research coincided with the domain controller being in its crippled state. So I restarted the Print Spooler service, experimentally. Lo and behold, the problem goes away. Now we’re getting somewhere.
Clearly something in the printer subsystem is grabbing lots of ports. Another bell rang – I recalled when installing printers on these new domain controllers that instead of TCP/IP ports, I ended up with WSD ports.
What on earth is a WSD port?! (Etc.)
So these WSD ports are a bit like the Bonjour service, enabling computers to discover services advertised on the network. Not at all relevant to a typical Active Directory managed workspace, where printers are deployed through Group Policy. WSD ports (technically monitors, not ports) are however the default for many new printer installations, in Windows 8 and Server 2012. And as far as I can tell, they have no place in an enterprise environment.
Anyway, to cut a long story short (no, I didn’t did I, this is still a long story, sorry!), I changed all the WSD ports to TCP/IP ports. The problem has gone away. Just like that.
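If you have a lot of printers to deal with, the PrintManagement PowerShell module (built into Windows 8 and Server 2012) can take some of the drudgery out of the swap. A rough sketch – the printer name and IP address below are made up, and you can do exactly the same through the Print Management console if you prefer:
# List printers still sitting on WSD "ports"
Get-Printer | Where-Object { $_.PortName -like "WSD*" } | Format-Table Name, PortName
# Create a standard TCP/IP port and move a printer onto it
Add-PrinterPort -Name "IP_10.0.1.50" -PrinterHostAddress "10.0.1.50"
Set-Printer -Name "HP LaserJet (1st floor)" -PortName "IP_10.0.1.50"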
I spent countless hours trying to fix these domain controllers. I’m now off to brick the windows* at Microsoft and HP corporate headquarters.
Hope this saves someone somewhere the same pain I experienced.
When it comes to singling out and “claiming” verses of scripture, proponents of the Word of Faith movement don’t have a monopoly. From Conservative to Charismatic, Evangelical to Eastern Orthodox, Christians love clinging onto comforting extracts from the Word of God. And this is right and commendable.
Have you ever needed to write down a web site address, or worse – type it into a text message? And it’s something like http://www.someboguswebsite.com/this/is/a/painfully/long/url. Tedious, right? Or have you needed to paste an address into a tweet, but you’ve come up against the maximum character limit?
(Image: Micro Minibus by Robert Couse-Baker)
In the case of Twitter, chances are you’ve used Twitter’s URL shortener of choice, Bitly. In this case, the awful, long URL becomes http://bit.ly/1xXCa5h – 21 characters instead of 60. Quite a trick. So you use the shortened URL for convenience, pass it on via social media or SMS and this is magically transformed into the original URL, upon use.
Recently in my very geeky news feed, I came across Polr, a self-hosted URL shortener. What a wheeze! Grab yourself a suitable domain, and you can poke your tongue out at Twitter, Google and the like, with all their evil data-mining ways.
It was surprisingly easy to get up and running with Polr, in our case using a virtual server hosted with Amazon. We bought a nifty little domain, gd1.uk and off we go! To be honest, the most time-consuming part was tracking down a short domain name – there aren’t many about.
All this is a roundabout way of saying, please feel free to use our shiny, brand new URL shortener. Because it’s so young, the URLs generated really are very short. https://geekanddummy.com for example is now http://gd1.uk/1 – just 15 of your precious characters.
Yes, we’ve had to put adverts on it. Server hosting ain’t free. But we won’t charge you for using the service and we have no wicked designs on your data. Promise.
So go to it. Bookmark gd1.uk and enjoy the majesty, the awe of the world’s best* URL shortener. GD1 – it’s a good one.
As you may know from other articles here, I have a Synology DS214Play NAS, and I’m a big fan. It can do so much for so little money. Well, today I’m going to show you a new trick – it will work for most Synology models.
There are a few different ways of remotely connecting to and controlling computers on your home network. LogMeIn used to be a big favourite, until they discontinued the free version. TeamViewer is really the next best thing, but I find it pretty slow and erratic in operation. It’s also not free for commercial use, whereas the system I describe here is completely free.
Many people extol the virtues of VNC, but it does have a big drawback in terms of security, with various parts of the communication being transmitted unencrypted over the network. That’s obviously a bit of a no-no.
The solution is to set up a secure SSH tunnel first. Don’t worry if you don’t know what that means. Just think about this metaphor: imagine you had your own private tunnel, from your home to your office, with locked gates at either end. There are no other exits from this tunnel. So no one can peek into it to see what traffic (cars) is coming and going. An SSH tunnel is quite like that. You pass your VNC “traffic” (data) through this tunnel and it is then inaccessible to any prying eyes.
Assumptions
This guide assumes the following things:
You have a Synology NAS, with users and file storage already configured.
You have at home a Windows computer that is left switched on and connected to your home network while you’re off-site.
Your home PC has a static IP address (or a DHCP reservation).
You have some means of knowing your home’s IP address. In my case, my ISP has given me a static IP address, but you can use something like noip.com if you’re on a dynamic address. (Full instructions are available at that link.)
You can redirect ports on your home router and ideally add firewall rules.
You are able to use vi/vim. Sorry, but that knowledge is really beyond the scope of this tutorial, and you do need to use vi to edit config files on your NAS.
You have a public/private key pair. If you’re not sure what that means, read this.
Install VNC
There are a few different implementations of VNC. I prefer TightVNC for various reasons – in particular it has a built-in file transfer module.
When installing TightVNC on your home PC, make sure you enable at least the “TightVNC Server” option.
Check all the boxes in the “Select Additional Tasks” window.
You will be prompted to create “Remote Access” and “Administrative” passwords. You should definitely set the remote access password, otherwise anyone with access to your home network (e.g. someone who might have cracked your wireless password) could easily gain access to your PC.
At work, you’ll just need to install the viewer component.
Configure Synology SSH server
Within Synology DiskStation Manager, enable the SSH server. In DSM 5, this option is found at Control Panel > System > Terminal & SNMP > Terminal.
I urge you not to enable Telnet, unless you really understand the risks.
Next, login to your NAS as root, using SSH. Normally I would use PuTTY for this purpose.
You’ll be creating an SSH tunnel using your normal day-to-day Synology user. (You don’t normally connect using admin do you? Do you?!) Use vi to edit /etc/passwd. Find the line with your user name and change /sbin/nologin to /bin/sh. E.g.:
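For example – the username “rob”, UID/GID and home directory here are purely illustrative; only the shell at the end of the line changes:
rob:x:1026:100:Rob:/var/services/homes/rob:/bin/sh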
As part of this process, we are going to make it impossible for root to log in. This is a security advantage. Instead if you need root permissions, you’ll log in as an ordinary user and use “su” to escalate privileges. su doesn’t work by default. You need to add setuid to the busybox binary. If you don’t know what that means, don’t worry. If you do know what this means and it causes you concern, let me just say that according to my tests, busybox has been built in a way that does not allow users to bypass security via the setuid flag. So, run this command:
chmod 4755 /bin/busybox
Please don’t proceed until you’ve done this bit, otherwise you can lock root out of your NAS.
Next, we need to edit the configuration of SSH. We have to uncomment some lines (that normally begin with #) and change the default values. So use vi to edit /etc/ssh/sshd_config. The values you need to change should end up looking like this, with no # at the start of the lines:
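(These are standard OpenSSH option names – match them against the commented-out defaults already in your own file, which may differ slightly between DSM versions:)
AllowTcpForwarding yes
LoginGraceTime 5m
MaxAuthTries 6
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys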
In brief these changes do the following, line by line:
Allow using SSH to go from the SSH server (the NAS box) to another machine (e.g. your home PC)
If you foul up your login password loads of times, restrict further attempts for 5 minutes.
Give you 6 attempts before forcing you to wait to retry your logon.
Allow authentication using a public/private key pair (which can enable password-less logons).
Point the SSH daemon to the location of the list of authorized keys – this is relative to an individual user’s home directory.
Having saved these changes, you can force SSH to load the new configuration by uttering the following somewhat convoluted and slightly OCD incantation (OCD, because I hate leaving empty nohup.out files all over the place). We use nohup, because nine times out of ten this stops your SSH connection from dropping while the SSH daemon reloads:
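nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out &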
You need to have a public/private SSH key pair. I’ve written about creating these elsewhere. Make sure you keep your private key safely protected. This is particularly important if, like me, you use an empty key phrase, enabling you to log on without a password.
In your home directory on the Synology server, create (if it doesn’t already exist) a directory, “.ssh”. You may need to enable the display of hidden files/folders, if you’re doing this from Windows.
Within the .ssh directory, create a new file “authorized_keys”, and paste in your public key. The file will then normally have contents similar to this:
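For example – the key material is truncated here, so paste in your own full public key, trailing comment and all:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...rest-of-key-data...== rob@work-laptop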
This is all on a single line. For RSA keys, the line must begin ssh-rsa.
SSH is very fussy about file permissions. Log in to the NAS as root and then su to your normal user (e.g. su rob). Make sure permissions for these files are set correctly with the following commands:
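Something along these lines usually keeps it happy – substitute your own username and home directory path:
chmod 755 /var/services/homes/rob
chmod 700 /var/services/homes/rob/.ssh
chmod 600 /var/services/homes/rob/.ssh/authorized_keys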
If you encounter any errors at this point, make sure you fix them before proceeding. Now test your SSH login. If it works and you can also su to root, you can now safely set the final two settings in sshd_config:
PermitRootLogin no
PasswordAuthentication no
The effect of these:
Disallow direct logging in by root.
Disallow ordinary password-based logging in.
Reload SSH with nohup kill -HUP `cat /var/run/sshd.pid`; rm nohup.out & as before.
Setting up your router
There are so many routers out there that I can’t really help you out with this one. You will need to port forward a port number of your choosing to port 22 on your Synology NAS. If you’re not sure where to start, Port Forward is probably the most helpful resource on the Internet.
I used a high-numbered port on the outer edge of my router. I.e. I forwarded port 53268 to port 22 (SSH). This is only very mild protection, but it does reduce the number of script kiddie attacks. To put that in context, while I was testing this process I just forwarded the normal port 22 to port 22. Within 2 minutes, my NAS was emailing me about failed login attempts. Using a high-numbered port makes most of this noise go away.
To go one better however, I also used my router’s firewall to prevent unknown IP addresses from connecting to SSH. Since I’m only ever doing this from work, I can safely limit this to the IP range of my work’s leased line. This means it’s highly unlikely anyone will ever be able to brute force their way into my SSH connection, if I ever carelessly leave password-based logins enabled.
Create a PuTTY configuration
I recommend creating a PuTTY configuration using PuTTY’s interface. This is the easiest way of setting all the options that will later be used by plink in my batch script. plink is the stripped down command-line interface for PuTTY.
Within this configuration, you need to set:
Connection type: SSH
Hostname and port: your home external IP address (or DNS name) and the port you’ve forwarded through your router, preferably a non-standard port number.
Connection > Data > Auto-login username: Put your Synology user name here.
Connection > SSH > Auth > Private key file for identification: here put the path to the location of your private key on your work machine, from where you’ll be initiating the connection back to home.
Connection > SSH > Tunnels: This bears some explanation. When you run VNC viewer on your work machine, you’ll connect to a port on your work machine. PuTTY forwards this then through the SSH tunnel. So here you need to choose a random “source port” (not the normal VNC port, if you’re also running VNC server on your work machine). This is the port that’s opened on your work machine. Then in the destination, put the LAN address of your home PC and add the normal VNC port. Like this:
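For example – the home PC address here is made up; 5990 is just the arbitrary local port I picked, and 5900 is the default VNC server port:
Source port: 5990
Destination: 192.168.1.50:5900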
Make sure you click Add.
Finally, go back to Session, type a name in the “Saved Session” field and click “Save”. You will then be able to use this configuration with plink for a fully-automatic login and creation of the SSH tunnel.
Now would be a good time to fire up this connection and check that you can login okay, without any password prompts or errors.
Using username "rob".
Authenticating with public key "Rob's SSH key"
BusyBox v1.16.1 (2014-05-29 11:29:12 CST) built-in shell (ash)
Enter ‘help’ for a list of built-in commands.
RobNAS1>
Making VNC connection
I would suggest keeping your PuTTY session open while you’re setting up and testing your VNC connection through the tunnel. This is really the easy bit, but there are a couple of heffalump pits, which I’ll now warn you about. So, assuming your VNC server is running on your home PC and your SSH tunnel is up, let’s now configure the VNC viewer at the work end. Those heffalump pits:
When you’re entering the “Remote Host”, you need to specify “localhost” or “127.0.0.1”. You’re connecting to the port on your work machine. Don’t enter your work machine’s LAN IP address – PuTTY is not listening on the LAN interface, just on the local loopback interface.
You need to specify the random port you chose when configuring tunnel forwarding (5990 in my case) and you need to separate that from “localhost” with double colons. A single colon is used for a different purpose, so don’t get tripped up by this subtle semantic difference.
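So, with my choices above, the address to enter in the VNC viewer ends up looking like this:
localhost::5990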
If you have a working VNC session at this point, congratulations! That’s the hard work out of the way.
It would be nice to automate the whole connection process. While you have your VNC session established, it is worth saving a VNC configuration file, so you can use this in a batch script. Click the VNC logo in the top left of the VNC session, then “Save session to a .vnc file”. You have the option to store the VNC password in this file, which I’ve chosen to do.
Before saving the session, you might want to tweak some optimization settings. This will really vary depending on your preferences and the speed of your connection. On this subject, this page is worth a read. I found I had the best results when using Tight encoding, with best compression (9), JPEG quality 4 and Allow CopyRect.
One batch script to rule them all
To automate the entire process, bringing up the tunnel and connecting via VNC, you might like to amend the following batch script to fit your own environment:
@echo off
start /min "SSH tunnel home" "C:\Program Files (x86)\PuTTY\plink.exe" -N -batch -load "Home with VNC tunnel"
REM Use ping to pause for 2 seconds while connection establishes
ping -n 2 localhost > NUL
"C:\Batch scripts\HomePC.vnc"
I suggest creating a shortcut to this batch file in your Start menu and setting its properties such that it starts minimised. While your SSH tunnel is up, you will have a PuTTY icon on your task bar. Don’t forget to close this after you close VNC, to terminate the tunnel link. An alternative approach is to use the free tool MyEnTunnel to ensure your SSH tunnel is always running in the background if that’s what you want. I’ll leave that up to you.
DSM Upgrades
After a DSM upgrade, you may find that your SSH config resets and you can no longer use VNC remotely. In that eventuality, log into your NAS via your LAN (as admin) and change the config back as above. You’ll also probably need to chmod busybox again.
root locked out of SSH?
For the first time in my experience, during May 2015, a DSM upgrade reset the suid bit on Busybox (meaning no more su), but didn’t reset the PermitRootLogin setting. That meant that root could not log in via SSH. Nor could you change to root (using su). If you find yourself in this position, follow these remedial steps:
Go to Control Panel > Terminal & SNMP
Check the “Enable Telnet service” box.
Click “Apply”.
Log in as root, using Telnet. You can either use PuTTY (selecting Telnet/port 23 instead of SSH/port 22) or a built-in Telnet client.
At the prompt, enter chmod 4755 /bin/busybox.
Go to Control Panel > Terminal & SNMP
Uncheck the “Enable Telnet service” box.
Click “Apply”.
Do make sure you complete the whole process; leaving Telnet enabled is a security risk, partly because passwords are sent in plain text, which is a Very Bad Thing.
Conclusion
So, what do you think? Better than LogMeIn/TeamViewer? Personally I prefer it, because I’m no longer tied to a third-party service. There are obvious drawbacks (it’s harder to set up, for a start; and if you firewall your incoming SSH connection, you can’t use this from absolutely anywhere on the Internet) but I like it for its benefits, including what I consider to be superior security.
Anyway, I hope you find this useful. Until next time.