31 May 2020 Weeknote

Habits, as I’ve learned over the past few months, are a good thing. They’re also something I resisted like the plague in the past, owing to a misplaced idea that creativity and all things good came out of spontaneity, not repetition.

I’m not sure I even believed that myself at the time, so it’s good to get it out there and over with.

One habit that I’ve been meaning to get into for about fifteen years is the habit of blogging regularly, something that I really haven’t done for a good ten years, possibly more. Prompted by the appearance in my feeds of this post from the redoubtable Ben Hammersley (were you twiddling with your feeds, Ben?) I’ve decided that a regular weeknote of my own is in order.

Unlike many weeknotes, there won’t be much about work in these. There are a couple of reasons for that: much of what I do these days is connected to people management, or is in some way sensitive to the business. I can’t really write much about that, although I might write more generally about digital publishing every now and then (I have, as you can guess, opinions).

So the focus is going to be a bit more personal. I hope that’s OK.

Covid exhaustion

In common with a surprisingly small percentage of the population I’ve had the coronavirus in my system. For me the symptoms at the time I got it were minor: a raised temperature for a whole day, a very intermittent cough for a couple of days, a couple of other things. The biggest impact was tiredness, which was like nothing I’ve ever encountered before. I felt ill first thing in the morning, so I started writing an email to my manager – and something which should have taken me five minutes to compose took nearly half an hour. I couldn’t concentrate, and my eyes just started closing.

I promptly slept from 9am till 9pm, when I woke up and soon enough fell asleep again.

Since then tiredness has been an ever-present factor in my life, and I’ve learned to manage it so I get the most important things done in the morning. This week, that feeling has been back with a vengeance. I don’t know if this is the start of some kind of post-illness fatigue syndrome or what, but at one point in the early evening I was lying on the small sofa and was so tired that I literally felt my arms slump to my side as I passed out.

I’m hoping this will pass, but if not, I’ll embrace it and just get up earlier. You can only do what you can do.

Big-picture productivity

A few weeks ago I signed up to Peter Akkies’ “Big Picture Productivity” course, and I’ve just completed the second module. I discovered Peter through YouTube – where I discover 90% of people these days – thanks to a series of really nice videos he’s made on setting up Things, OmniFocus and some other tools.

The course is really good, and I’d recommend it to most people. The modules released so far have been on the basic productivity strategies of thinking about your values and roles then working through what your goals are. Once you’ve done that it’s time to work out what the actionable projects are which lead to your goals.

I like Peter’s avoidance of the SMARTER framework which everyone uses for goals. One thing that I’ve come to understand is that some goals don’t have an end: for example, a goal of reducing your carbon emissions isn’t ever going to end, but it’s still a goal. The projects you put together to achieve it should follow something closer to a SMARTER framework, but the goal itself can be ongoing.

Peter has also finally got me using Notion, which I’ve resisted for a long time. A notes app that is really a database sounds too much like the kind of thing that I would spend about a year tinkering with to get it just so while never actually using. But Peter’s course shows you how you can use it to track goals and projects in a way that I really like. I’ll still use my Bullet Journal for my day-to-day note taking, but when I need something more serious I can see how Notion fits in.


The other big discovery of the week is Tot, which I’ve written about extensively already so I won’t dwell on it too much. However, it’s a great example of an application which uses a limitation as a fundamental feature to nudge someone towards a better behaviour. We need more of that.


For some reason I ended up listening to “I’m a tree” by Imani Coppola, AKA the single that almost sank her career. After her first single was a big hit – top 20 worldwide and all that kind of thing – this one was released and promptly charted… absolutely nowhere. Well, it scraped the top 200 in Australia.


Grayson’s Art Club is of course fantastic. There’s a long-running battle in our house over which of us is Grayson and which is Philippa. I’m also really enthused by the number of talks that museums and art galleries are making available virtually while we’re all stuck at home, plus the new range of plays and ballets available on YouTube. At least the internet is enabling democratic culture.

As well, of course, as resurgent nationalism, but we will talk no more of that.


I hadn’t heard of Tot before I read MacStories’ article about its new share extension in iOS, but when I did I was intrigued. And when I used the Mac version I knew it was something I really wanted on my Mac.

At its heart, Tot is a scratch pad. It’s just a place to jot down little snippets of text, often that you will use elsewhere.

There are only seven documents, called dots, represented by – you guessed it – a series of dots along the top of the window. If a dot has text in it, it gets a colour fill (you can change this for accessibility purposes – a nice touch).

This conceit of seven and only seven possible “documents” is what makes Tot so good. It places a limitation on what the user can do, which nudges you towards a particular kind of behaviour. Applications like Drafts or Apple Notes allow you to keep on making more and more new documents, and that encourages you never to actually look back on what you’ve written.

The seven-dot limitation of Tot means you can’t do that: if you keep taking notes, as soon as you hit that limit you’re going to have to go back through what you’ve written and either delete something or, if it’s still valuable, move it elsewhere.

There are some other cute little interface touches, all of which remind me quite why I love the Iconfactory’s software. You can have Tot set up either as a menu bar icon or a dock icon. If you have it in the dock, the icon changes to match the colour of the front-most dot.

You can set a keystroke to invoke it on the Mac and there’s a smart set of keyboard shortcuts which let you move forward and back through dots without taking your hands off the keys. You’d be surprised how many text applications don’t have proper navigation like this. There’s also, I’m pleased to say, Touch Bar support.

The Mac version is free: the iOS version is $20. That sounds like a lot for an iOS app, but in the great history of what you can charge for software it’s peanuts. I paid more than that for ridiculous shareware games in the past. And as Mike Schmitt on Sweet Setup points out, for an app this simple a subscription model just doesn’t make any sense.

And the iOS version is excellent, working exactly as you would expect. If you’re using an iPad with a keyboard then you’ll find all the keyboard shortcuts from the Mac version. To switch between dots, you can just swipe across the screen with a single finger. Again, it’s simple, but you can see and touch (literally) the thought that has gone into making it easy.

The iPadOS version really comes into its own when used in a Slide Over window. It’s ideal in this kind of scenario. Of course you can use it full screen, or split view, but when you have it in Slide Over you can see the screen and take notes easily from what you’re working on, or just drag and drop text or links from your “main” view.

The key question with any new software, though, is “what can you actually use it for?” For me, it’s all about jotting down random thoughts and ideas that I’ll take and do something with later – this blog post started life as a set of jottings in a single dot, and then moved to Ulysses once I thought I had enough to start writing a full blog post on it. And the nice thing is that when I exported to Ulysses, all my links and formatting just dropped right in.

COVID-19 is tailor-made for our culture

I should start with this: I’m not an expert. You should listen to those that are.

COVID is an almost perfect virus. It rarely kills its host. Unlike its distant relative MERS, which makes people ill fast and kills them before they get a chance to infect many others, it creeps up on you.

In fact, the majority of sufferers remain ambulatory. They may have outward signs, like a cough, but they may not – and our culture has trained us to keep going when we feel under the weather, to ignore symptoms.

It hits us when we’re economically vulnerable to it. Zero-hours contracts mean there is a pool of people who have no choice but to keep working, and a set of businesses built around the idea that you don’t have to keep people on staff. If you’re under 50, you’ve never really experienced a dangerous infectious disease that spreads like this. Yes, there was HIV, but that could mostly be avoided. COVID can’t.

But it also hits us when we’re mentally unprepared.

I’m 53, and as a child I was vaccinated against two things: smallpox (one of the last wave of children to get the smallpox vaccine); and polio. I got my immunity (such as it is) to measles, mumps, scarlet fever and German measles the hard way, by contracting the disease. And I remember the steps my parents had to take to keep me isolated (no playing outside, stuck in my bedroom, no friends visiting, EXTRA COMICS) because some of those diseases could kill other children. And, of course, could have killed me, although their undoubted worry didn’t register at the time.

If you’re younger than me, you’ve grown up in a world where most of the major childhood infectious diseases didn’t exist: you’re used to infection being something that either you didn’t have to worry about (colds, seasonal flu) or affected someone else, somewhere else.

And if you’re older than me (OK, boomer)… well, you should know better.

The generations currently alive are probably the first in history to include no one who remembers the last global pandemic. The influenza of 1918 was a distant memory to my grandmother, born in 1910, but for anyone of my mother’s age or younger – everyone currently alive – the danger of pandemic has faded from the collective memory.

Having lost the folk memory, all we have to keep us cautious and keep us alive is the knowledge of experts, and yet we also live at a time when major Western countries have turned away from an understanding of the importance of expertise. Brexit, and the pride in ignorance that characterises Trumpism, show us that the respect for expertise which built post-war prosperity has vanished. Even amongst my generation, the notion of the “wisdom of crowds” tells us that while everyone can’t be an Einstein, if we all click our heels and wish three times, a hundred of us can add up to one.

No one is going to crowdsource a new treatment for COVID. Wikipedia isn’t going to discover a vaccine.

Social media allows accurate information to pass faster than before, which would be a ray of hope were it not for the fact that rumour, speculation and outright lies spread faster. The old, early internet idea that “good information drives out bad” is probably still being touted by the Digerati somewhere, but it’s really now pretty laughable.

And of course the news that your local supermarket is running out of bread can spread faster than ever, letting the well-off drive down in their cars and buy up the last remaining stocks to put in their chest freezers, while the poorer wait for a bus and find shelves empty. We have even forgotten that “panic buying” doesn’t mean everyone gets a fair share, it means that the poorest and weakest will go hungry.

Never has a culture been less prepared for a pandemic, and never has a virus had a better chance to become endemic in a population. COVID almost seems tailor-made to capitalise on every single weakness in our culture, from denial of expertise and anti-vaccine madness to our lack of experience of pandemics to the way our economy is structured. I said earlier it was almost perfect. I was underplaying it. I think it actually is the perfect virus for our times.

But it’s not hopeless, and life will go on. These are obstacles, and it is down to each of us as individuals to use them as ways to improve ourselves, to do what we can for others, to make ourselves better people for the experience. “Amor Fati”, as the Stoics said.

Scamware, malware, viruses. Who cares?

John Gruber:

Computer viruses are called viruses because like biological viruses, they spread by themselves. What Malwarebytes is talking about are scam apps — things that trick or otherwise convince the user to install voluntarily. Dan Goodin had a piece at Ars Technica last month about the scourge of fake Adobe Flash installers — which work because unsophisticated Mac users had been truthfully told they needed to upgrade their version of Flash for a decade. It’s a real problem — but third-party antivirus software is not the answer. As usual, Tsai has a wonderful compilation of links to commentary on the matter.

Sigh. I can’t believe John is still making this distinction as if it matters. The vast majority of malware on Windows and pretty much any platform is scamware, not viruses. This has been true on Windows for probably a decade, maybe longer.

What matters is, as I argued 12 years ago, that the Mac is now a large enough target to bother creating malware for. There’s money to be made out of those Mac users, particularly the ones who bought the line that the Mac is immune from malware.

Thinner, lighter, faster

John Gruber, on the “thinner, lighter, faster” Galaxy Book S compared to the MacBook Air:

Well, there’s the small notion of, you know, the operating system. And let’s see if it really does get 25 hours of video playback. But the point stands. A lot of people using MacBooks today aren’t devoted to the MacOS experience, and might switch, based on hardware alone. The ARM revolution for notebook PCs is coming, whether Apple is ready or not.

John’s right that a large chunk of people using MacBooks today aren’t devoted to macOS. But… macOS also just isn’t as good as it used to be. That’s not about software quality, something that bothers technical users more than ordinary ones. It’s just that Windows 10 has got better, to such a degree that unless you’re bought into the whole Apple ecosystem there’s not much point in going for a Mac.

The Mac is now Apple’s weakest link.


I was an early user of ChromeOS. Not as early as David Ruddock, who has written a post on how Google’s flagship desktop operating system has stalled, but as soon as the first commercially available Chromebook was out, I was in. Since then I’ve always had a Chromebook in my life, and usually it’s been a pretty high-end one. Sometimes, as was the case with the Pixelbook, they have spent quite a chunk of time as my main or even only laptop.

I’m pretty sure that I made a similar argument to David’s a few years ago. When rumours circulated that ChromeOS and Android were going to merge, or that Android apps would come to ChromeOS, I was not only sceptical, but actively antagonistic towards the idea. Android apps would mean less focus on web apps, and that means less focus on what ChromeOS is really, really good at: the web.

I think, though, that David’s piece doesn’t really focus on what Google was trying to achieve with ChromeOS and so he misses the mark. To understand that, you need to look at what people actually use computers for now.

First of all, it’s important to split how people use laptops between work and home. David says:

“I say this even as one of the few people who can do 95% of my job on a Chromebook: that 5%, when you really, really need it, is more than enough reason to avoid a platform entirely. And for many others, it’s much more than 5%: it’s their entire workflow.”

Actually, probably 90% of workers who use a laptop can do their jobs on a Chromebook. If your life revolves around office applications – and that’s most people who use a computer for work – then web apps are not only fine, in many cases they are the only option on a laptop. The Microsoft Office suite is a first-class citizen on the web, and some of Microsoft’s PWAs are excellent. I’ve known Office 365 deployments where people don’t even bother to have the desktop apps installed. Salesforce… does it even have a Windows or Mac desktop version? It’s all about the web.

In that environment, ChromeOS is fine – and it has been for years.

Of course, there are plenty of people, typically in the creative industries, who require applications that either don’t exist on the web or where the web apps just aren’t good enough. Because journalists work in this area they often think everyone does.

But as Benedict Evans has repeatedly pointed out, there are seven million Adobe Creative Cloud subscribers out of the 1.5bn PCs in the world. That’s less than 0.5% of all computer users. Adobe isn’t the be-all and end-all of creative software, or of tasks you can’t do with web apps, but even being generous it’s hard to make the case that the total number of PC users who can’t live in a world of web apps amounts to more than 2-5%.

Home users are a little different. Yes, there are home users who fall into that nebulous category of “prosumer” – the kind of people who do “proper” photography or video editing, and want/need Photoshop or Premiere/Final Cut to do that. But again, they’re rare. The majority of the world’s photographs and videos are created, edited, published and viewed on smartphones, never touching a PC at all.

The one thing that lots of home users do with laptops which Chromebooks struggle with is games, and even here the majority of casual gaming is phone based. This is where having Android app support for ChromeOS makes sense: being able to play the huge range of casual and less casual mobile games on a ChromeOS laptop would be awesome. Sadly though – and this is where David’s piece is correct – to do it properly you need the developers to have optimised for larger screens, and to put it bluntly most simply haven’t bothered.

Remember I said that for web apps, “ChromeOS is fine – and it has been for years”? That’s actually Google’s biggest challenge. In the space of nine years, ChromeOS has run slap into the same challenge that it took Windows and MacOS three decades to reach: there really aren’t a lot of improvements left to make. Of course, ChromeOS needs to keep pace with web APIs and – in true Google style – push them forward. And there are definitely areas for improvement, like biometric support (as David highlights). You can improve the interface, as all OSes should.

But what else do you want ChromeOS to do, other than be the best platform for running the web apps which 90% of users care about?

This post was written and posted on an iPad Pro, another device where you can’t do any creative work and that isn’t really suitable for professionals.


January 2nd, or “the death of goals day”. You spend the first day of the year thinking about all the things you want to achieve, and by this day you’ve broken your promises to yourself, forgotten about the things you want to get done, and wondered at what point in your life one-day hangovers turned into two-day hangovers.

Don’t worry. We have all done this.

I have an ambivalent relationship with goals. On the one hand, goals add value and, if chosen well, meaning to your life. On the other hand, goals can be stifling, endless wells of disappointment and failure.

If you’ve ever done any kind of personal development course you’ve probably been told to make goals SMARTER (the personal development industry loves an acronym). And for some things, SMARTER really is better. If you want to pay off your credit card debt, then being specific about when you’re going to do it and making sure it’s achievable is a good idea.

But not everything you want to do in life is SMARTER. And the peril with trying to apply the same framework to everything you want to do is that you end up feeling bad when you start something only to find out you don’t care enough about it to actually keep going.

Sometimes, you need to start small and just find out if something is right or possible for you. For example, I’m interested in photography, and – as I don’t have any hobbies and know I need something other than work in my life – I want to see if it’s something I enjoy. I could set myself a SMARTER goal on this: “By the end of January I will have found and enrolled in a photography course”, say. But suppose I look around, read some course descriptions, and find that actually it all sounds dreadful and I can’t get excited about it? Have I FAILED in my goal?

The best advice I’ve seen about goals is Ryder Carroll’s in The Bullet Journal Method, where he talks about splitting goals into sprints: little two-week chunks, each complete in itself, which nonetheless move you towards the overall goal. And importantly, if after a sprint you find you want to head off in a different direction from where your goal was leading, that’s fine.

Sometimes, all a goal is is an opportunity to learn about something and maybe take it further. So start small, and be forgiving. Never forget that it’s fine to decide that something which looked important to you before you started is not something you want to pursue.

Everything you do is a chance to learn.  The only time you learn nothing is if you do nothing.


How often do you think about what makes you happy? Not “how often are you happy?” but how often do you reflect on the things which make you happy and try and learn something from that? 

If you’re anything like me, which you probably are, then the answer is “almost never”. 

Yet it’s only by doing this kind of reflection that we can understand happiness and try and make our lives better.  

One of the most limiting parts of our culture is that we learn what happiness looks like not through self-reflection but through watching other people. Whether that’s watching Love Island and imagining ourselves looking like or being like the people on it, or it’s watching YouTube videos about some guy in LA with the most uber-minimal life you’ve ever seen, it’s the same: “This person does this stuff and is happy. If I did that stuff I’d be happy too”.

We’re all guilty of this. It is at the crux of our mediated lives. 

So take some time to think about the things that make you happy. Think more on what it is about those things that you love, that (to steal a Marie Kondo-ism) sparks joy in you. They can be big or small: one of mine is simply watching the birds on the bird feeders outside our living room window. Why does this make me happy? Because I love nature and the natural world, so I want to get into it more.  

Today’s a good day to think about this.  

So I got a Surface Pro X

Earlier this year I bought myself a shiny new Mac. This was the first Mac I’d bought since 2015, when I bought the 12in MacBook, a machine which lasted me four years but which was starting to struggle a little with battery life and a few other things.

The Mac I chose was one of the new retina MacBook Airs, the base model. As you can guess from the fact that a Core m3 MacBook was capable of being my main machine for four years, I’m not a particularly demanding power user. I’m not running Photoshop, I don’t have to compile code. Mostly, I write, I do spreadsheets, and I browse the web. Mostly spreadsheets.

The Air is nice. Having TouchID built in is great, the keyboard is adequate, it’s fast enough and although it only has 128GB of storage, in the age of cloud applications and sync engines that are smart enough to work out what to sync and what you don’t ever use, that’s actually not as much of an issue for most people as you’d think.

But I don’t love it. Unlike the MacBook, which for at least two of the years I used it was an absolute darling of a machine, the MacBook Air has just always felt a little half-hearted. There’s nothing wrong with it, but it’s probably the first Mac I’ve ever owned that I just don’t love.

Meanwhile on the other side of the Mac/Windows divide, Microsoft has released the Surface Pro X, and it’s as if they basically created a computer purpose-built to press all of my technology lust buttons. It’s light and portable – I love light and portable – and at the cutting edge thanks to its ARM processor. It’s got LTE built in, something I’ve been desperate for on a Mac for years, and something that made me use my iPad a lot. Oh and it’s a tablet too, and I absolutely adore tablets.

I don’t think I’ve ever vacillated more over buying a computer than I did over the Surface Pro X. On the day it was announced, I added it to my basket on the Microsoft store. On the next day, I took it out. Repeat that three times and you’ll have an idea of how much I agonised over whether to buy one or not.

The reviews, when they came out a week or so before release, should have made up my mind that this was not a device for me. And yet… Reader, I bought one. And not just the lowest end one: I went for the fully tricked out 16GB of RAM and 512GB of SSD version, because if this thing is going to be my main computing device for a few years, I want some future proofing.

Pricey. No, REALLY pricey

You are of course paying through the nose for all this. Even for the base model, you’ll barely get change out of £1200 once you’ve bought the keyboard (yes, you need this) and pen. Even if you’re lucky enough to get an education discount, you’re still looking at a machine that starts expensive and moves quickly to the level of pricing that will trigger automatic offers from your credit card company to raise your limit. If you want the fully-loaded 16GB of RAM and 512GB storage version, you’re going to pay the best part of £2,000.

What you get for that money is basically what all computers should be like in 2019. Silent, always connected, light, with a great typing experience and an amazing screen. Usable in a variety of modes, and equally adept in all of them. It’s a computer that makes you feel productive.

I’ve always believed that Apple makes the best hardware in the business, and in some areas this is still true (the new iPhone 11 is shockingly good). But when it comes to computers, Microsoft now beats it, pretty easily. The Surface Pro X’s design is arguably better than the iPad Pro’s, and I love that device.

Hardware without software is a paperweight

As an iPad user allow me a moment of schadenfreude: One of the biggest criticisms of the iPad from the Windows community has been its failure to run “real” desktop apps “like Photoshop”. Now, the leading edge of Microsoft devices is something which also can’t run Photoshop.

The good news is that the vast majority of applications I use on a day-to-day basis have already been ported to 64-bit ARM, which means I get good performance out of them. Using ARM applications also improves the battery life: whatever emulation system Microsoft is using to run Win32 apps, it pushes the processor hard enough to significantly decrease how long you can use the Surface Pro X without plugging it in. It’s still not bad – but you’ll definitely get a better experience if you can go ARM-only or ARM-mostly.

By and large, if you’re using ARM applications you’re going to find the experience of using the Surface Pro X really positive. That means most of Microsoft’s own apps, including – of course – Office (but weirdly not Teams), plus some staples like Spotify and WhatsApp Desktop.

The Surface Pro X can also run 32-bit Intel apps… but “run” is sometimes a generous way of putting it. It’s hit and miss whether an app will run properly, and if it does run at all, it can be prone to random freezes and crashes.

iTunes is a good example of this. It’s a Microsoft Store app, and it’s Win32, so it should work fine. But whenever I used it, it would randomly lock up while trying to do innocuous things like switching to a playlist. 1Password works, but the desktop client is slow and painful to use.

However, there’s now an ARM version of the new Edge browser, and because Edge lets you run pretty much any website as an application that appears in your task bar, you can run apps like Apple Music and Slack as web apps rather than their native equivalents. Because Microsoft has got into Progressive Web Apps (PWAs) in a big way, this is actually a good experience. And 1Password has a Chrome extension version – 1Password X – which is almost as fully-featured as the desktop client, and which you can run as part of the browser, since Chrome extensions work with the new Edge.

The right device for me… but maybe not for you

If you’re the kind of person who lives in Office 365 and browser-based applications, you’re going to find performance on the Surface Pro X is good, battery life is excellent, and built in LTE is wonderful. Microsoft has built the best device for experiencing its own software, and if you live a Microsoft life you’re going to love it.

This describes my computing world at the moment, so unsurprisingly I love this device. Ironically, it was my experience with the iPad Pro that helped me understand the Surface Pro X could work for me. I’ve never had a problem using the iPad Pro for almost all my work and a sizeable bit of play. I’m mostly deep into the Office 365 ecosystem, and the iPad Office apps are good. And I also knew from the iPad that ARM is capable of more than enough performance to support everything I want to do.

And remember too that for three years a MacBook was my everyday carry work machine. I’ve lived the life of USB C-only for a while. I know that performance is less important to me than lightness. The iPad taught me that integrated LTE and an all-day battery was also very high on my list of requirements.

All computers are inherently a compromise between size, design, performance, mobility and ease of use. You can buy a beast of a gaming “laptop” that weighs a tonne but absolutely screams at any game you throw at it. I’ve made the opposite trade: lightness and mobility over raw power.

So does anyone want to buy a barely-used MacBook Air?

The Pixel Slate and why I still really love it


The Pixel Slate has had a rough ride. When it was initially revealed, there was a level of excitement around it which built a big bubble of hype. When it was finally released that bubble burst, spectacularly.

The Slate had a lot of problems. There were performance issues in tablet mode, which meant doing something as simple as snapping two windows side by side was a laggy experience. There was the design of the Keyboard Cover, which made the device almost impossible to use in your lap unless you had thigh bones long enough to put you at NBA levels of height (or extremely weirdly shaped legs).

But the biggest and most damaging question was one that should have been obvious to Google prior to launch: why should you buy the Pixel Slate when the Pixelbook still exists, and even though it has an older processor and bezels as wide as the Grand Canyon it actually did everything that anyone would want from a Chromebook, in a more familiar form?

I have both a Pixelbook and a Pixel Slate. I had expected the Pixel Slate to effectively replace the Pixelbook. Everything the critics said about the Slate is true, and that’s why I never sold the Pixelbook — but despite the Pixelbook being, in many ways, a superior device, I still love using the Slate, and just when I think I’ve put it down for good it finds a way to sneak back into my life.

Why? The first reason is the screen — and no, I’m not talking about the bezels. I can live with the bezels on the Pixelbook, and the reality is that those on the Slate are not that much smaller. But the screen itself is genuinely joyful to look at over a long period of time. The only screen that I’ve ever used that rivals it is the iPad Pro, which is the best screen on the planet for ordinary use at the moment. 

Images on the screen are soft without lacking sharpness or clarity. Colours are absolutely perfect. I can look at this screen for a long, long time without wanting a break (note: I do take breaks anyway, and you should too!)

The second reason the Pixel Slate keeps luring me back is the keyboard. Yes, I know: you can’t use it in your lap, and it has to lie flat on a table because there are no clever magnets to keep it angled up like there are on the Surface Pro keyboards.

But the keys themselves are brilliant to type on. Round keys take a little while to get used to, but Google’s user testing was right. I find myself making fewer typing errors on it, and when you type all day that adds up to a lot of time not spent doing corrections.

Also, one for Apple: find a way to steal the bit from this keyboard that lets you put it at any angle. The two angles on the iPad Pro keyboard are a big improvement, but they just don’t cut it when you’re used to being able to work at any angle, as you can on the Pixel Slate.

I can’t speak for the lower end versions but the Core i5 that I have in my Slate performs perfectly well now, in laptop or tablet mode. Everything feels as snappy as you would expect. And battery life is fine: you can never have enough battery life, but the Slate will get me through a full typical working day. 

I think the biggest problem the Slate faced was that Google never really answered a simple question for themselves: Why make a tablet at all? What role is this device going to play in someone’s life? If it’s going to replace a laptop (or even “replace a laptop 80-90% of the time”) then it needs to be better than a laptop at a wide range of tasks. The iPad is the best example of this: it’s got a better screen, it’s easier to use, the battery lasts and lasts, it’s more powerful given the price, it has really good software, it integrates brilliantly if you have an iPhone, it’s hugely better for consuming content, and thanks to that processor power and easy-to-use software it’s often better for consumer-level content creation.

And yet… I doubt I’ll ever regret buying the Slate. It’s a big barrel of contradictions and half thought out ideas, but it’s also just one of the most pleasant and compelling devices I own. Even when it’s frustrating you over some little thing, it’s a joyful piece of design that misses the mark in places but delivers in others.