This post contains a few details on implementing a Quick Access Device Control for Android 11+. As of writing (Jan ’21), this is a relatively new feature and there aren’t many support resources out there for it. To help address this in some small part, here are a few gotchas I stumbled on while implementing it for a smart lights app I’ve been working on.
The first step, if you haven’t done so already, is to read through Google’s developer guide on this feature:
As far as Google’s docs go, this one isn’t too bad. It gives an overview of the most important points. However, there are some finer points it doesn’t address which you’ll most likely run into as you start to code.
With any luck, you might find answers for these in the helpful Q+A section that follows!
Q+A / Gotchas
Q: What should I put for MY_UNIQUE_DEVICE_ID? Does it need to be globally unique to prevent conflicts with device controls for other apps?
Device IDs should identify not just a type of device (e.g. “Philips Hue bulb”) but an instance of a particular device, and remain stable over time. End users add devices to the quick access section for an app, and the platform remembers which devices they’ve added using this unique device ID.
As far as I can tell device IDs are scoped to your app, so you don’t need to worry about your IDs conflicting with those from other apps. I.e. you don’t need to create some kind of fully-qualified ID including your package name.
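To make this concrete, here’s a rough sketch of a control built with a stable per-device ID. The builder calls come from the android.service.controls API; the ID scheme and `pendingIntent` are assumptions for illustration only:

```kotlin
import android.service.controls.Control
import android.service.controls.DeviceTypes

// The ID should pin down one physical device and stay stable across sessions.
// "hue-bridge-7/bulb-3" is a made-up scheme: bridge serial plus bulb index.
val control = Control.StatelessBuilder("hue-bridge-7/bulb-3", pendingIntent)
    .setTitle("Kitchen bulb")
    .setDeviceType(DeviceTypes.TYPE_LIGHT)
    .build()
```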
Q: What is a template ID and why do I need to provide it?
The quick access device control API makes use of reactive streams of immutable objects to update its UI. Because of this, it needs a way of identifying a particular template over time, independent of its object identity. This is where the template ID comes in. It lets the platform code know that two template objects with different properties (say a different title or range value) are actually referring to the same UI widget if their template IDs match.
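For example (a sketch using ToggleTemplate; the IDs and labels are made up), these two objects differ in state but share a template ID, so the platform treats the second as an update to the same widget:

```kotlin
import android.service.controls.templates.ControlButton
import android.service.controls.templates.ToggleTemplate

// Same template ID, different state: the platform updates the existing
// toggle in place rather than creating a new widget.
val before = ToggleTemplate("kitchen-toggle", ControlButton(false, "Turn on"))
val after = ToggleTemplate("kitchen-toggle", ControlButton(true, "Turn off"))
```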
Q: How should a communication error with my device be signaled back from performControlAction()? Via the existing Publisher for the device or the Consumer&lt;Int&gt; parameter?
Via the publisher. The Consumer parameter is just for signalling that the control action was received successfully, not necessarily that it was delivered to the device.
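A rough sketch of what this could look like (`updatePublisher`, `baseControl` and `sendToDevice` are placeholders for your own service code, not part of the platform API):

```kotlin
override fun performControlAction(
    controlId: String,
    action: ControlAction,
    consumer: Consumer<Int>
) {
    // Only acknowledges that the action was received by the app.
    consumer.accept(ControlAction.RESPONSE_OK)

    // Report the actual outcome later, through the control's publisher.
    sendToDevice(action) { success ->
        val status = if (success) Control.STATUS_OK else Control.STATUS_ERROR
        updatePublisher.onNext(
            Control.StatefulBuilder(baseControl).setStatus(status).build()
        )
    }
}
```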
Q: How do I know when it’s safe to disconnect from the device or clean up other resources?
Add a doOnCancel() listener to the Flowable returned by createPublisherFor(). It will be called when the Quick Access Device Controls UI goes to the background.
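Something along these lines (a sketch; `connectToDevices` and `disconnect` stand in for your own connection logic):

```kotlin
override fun createPublisherFor(controlIds: MutableList<String>): Flow.Publisher<Control> {
    val flowable = Flowable.create<Control>({ emitter ->
        connectToDevices(controlIds, emitter)
    }, BackpressureStrategy.BUFFER)
        .doOnCancel { disconnect() } // the controls UI has gone to the background

    // Adapt RxJava's Flowable to the java.util.concurrent.Flow.Publisher
    // type the platform expects.
    return FlowAdapters.toFlowPublisher(flowable)
}
```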
Long build times for software applications can be a significant business cost. Every minute that software developers spend waiting for the software to build is almost* pure waste. When this is multiplied across a number of developers, each running a number of builds per day, it quickly adds up.
To make this cost more tangible, imagine an enterprise mobile app that takes 5 minutes to build. Let’s assume that there are five mobile developers in the team running an average of 24 builds per day – that’s 4 builds per hour* over 6 hours of coding time in a day. This works out to 10 hours wasted each day.
This means that if developers are paid $100/hour, a five minute build translates to $1000 per day wasted, or $5000 a week.
This is more than the cost of hiring another developer(!)
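The arithmetic behind these figures, as a runnable sketch (all the inputs are the assumptions stated above):

```python
# Assumed inputs from the example above.
developers = 5
builds_per_day = 24  # per developer
build_minutes = 5
hourly_rate = 100  # USD per developer-hour

hours_wasted_per_day = developers * builds_per_day * build_minutes / 60
cost_per_day = hours_wasted_per_day * hourly_rate
cost_per_week = cost_per_day * 5  # working days

print(hours_wasted_per_day, cost_per_day, cost_per_week)  # 10.0 1000.0 5000.0
```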
It seems likely that this cost is under-appreciated by managers. The activity of building the app is mostly invisible to them. Kicking off and waiting for builds is, most of the time, something only developers do.*
Since slow builds are also an opportunity for significant cost savings, this lack of visibility isn’t a good thing. If their impact is hidden from management, it’s less likely that improvement work will be prioritised, and these savings will go unrealised.
To make matters worse, there are good reasons to think that the costs (and potential savings) of slow builds are much higher than the basic calculation above suggests.
Second order effects
Earlier we assumed that one minute of build time equals one minute of lost productivity – in other words, a linear relationship. This would mean that developers become 100% productive again as soon as a build finishes, regardless of how long they spent waiting.
In reality, this is probably a gross underestimate because it ignores second-order effects. For example, most developers would probably agree that the longer they spend waiting for a build, the longer it takes to get back into a fully productive state again. The real relationship between build time and lost productivity is likely to be nonlinear – i.e. not a straight line.
Could a more solid case be made for this theory? – that business costs increase nonlinearly with average build time?
Investigating this possibility turned up a study by Parnin and Rugaber (2010) of the Georgia Institute of Technology. They found that after a programmer is interrupted, it takes an average of 10-15 minutes for them to start editing code again. If build times past a certain length represent a kind of “interruption” (or lead to other kinds of self-interruptions), this would straight away make their cost nonlinear.
The question then becomes: how long does a build need to take before it can be considered an interruption? For example, how long before the developer’s mind starts to wander and they’re no longer in a flow state (or anything close to it)?
As a starting point we could agree that once the developer switches to another task, such as checking email or social media, they are most definitely “interrupted”.* To get a rough idea of how long it takes for this kind of thing to happen, I ran the following (very rigorous) straw poll:
The results suggest it doesn’t take long at all. 85% of the people who responded said they are very likely to distract themselves if a build takes anywhere between 5 seconds and a minute.*
If this result is combined with the findings of a 10-15 minute cost of interruptions, a grim picture starts to emerge. Once build times exceed a certain length, their cost would jump way up.
Attempt at an improved cost model
Let’s model this with some real numbers to get a clearer picture.
We’ll call the time it takes some random developer to get distracted while waiting for a build their distraction threshold. Going by the straw poll, the distribution of these across the population of developers looks to be right-skewed. That is, most developers get distracted quite quickly, with the numbers dwindling from there to form a right-hand tail. By the 5 minute mark, 100% of developers are distracted.
The distraction thresholds of 1,000 random developers might look like this:
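One way such a sample could be generated (a sketch: the text only pins down the right skew and the 5 minute cap, so the log-normal shape and its parameters are assumptions):

```python
import random

random.seed(0)

def sample_threshold_minutes():
    # Log-normal gives the right skew; min() enforces the 5 minute cap.
    # mu=-1.0, sigma=1.0 put the median around 22 seconds (an assumption).
    return min(random.lognormvariate(-1.0, 1.0), 5.0)

thresholds = sorted(sample_threshold_minutes() for _ in range(1000))
median_threshold = thresholds[len(thresholds) // 2]
```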
Now let’s work out how this could impact the productivity cost of build times.
To do this, let’s take the 1,000 randomly generated developers from above and, for a selection of different build times, work out the productivity cost incurred by each of them. That is, the total amount of time it takes from starting a build to being back in a productive state.
If the build time falls under the distraction threshold for a developer, we’ll assume that the productivity cost is simply the time spent waiting for the build.
However, if the build time goes above the distraction threshold, we’ll add an interruption cost on top of this. To model this extra cost we can use the findings from Parnin and Rugaber (2010). Their results on the time it takes a developer to start editing code again after resuming from an interruption (from 1,213 programming sessions) were:
Based on this table there’s a 35% chance of the interruption cost being less than one minute, and a 22% chance of it being between one and five minutes, etc.
Note that the percentages don’t add up to 100%. There’s a missing column for the 8% of sessions where the edit lag was 30-360(!) minutes.
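Putting the pieces together, the per-developer cost model might be sketched like this. The 35%, 22% and 8% buckets come from the table; the remaining 35% bucket, covering 5-30 minutes, is my assumption since that column isn’t quoted above:

```python
import random

# (probability, (low, high)) buckets for edit lag, in minutes.
INTERRUPTION_BUCKETS = [
    (0.35, (0.0, 1.0)),
    (0.22, (1.0, 5.0)),
    (0.35, (5.0, 30.0)),  # assumed middle bucket
    (0.08, (30.0, 360.0)),
]

def sample_interruption_cost():
    """Draw an interruption cost (minutes) from the bucketed distribution."""
    r = random.random()
    cumulative = 0.0
    for probability, (low, high) in INTERRUPTION_BUCKETS:
        cumulative += probability
        if r <= cumulative:
            return random.uniform(low, high)
    return random.uniform(30.0, 360.0)  # guard against float rounding

def productivity_cost(build_minutes, threshold_minutes):
    """Build wait, plus an interruption penalty once the wait exceeds the threshold."""
    if build_minutes <= threshold_minutes:
        return build_minutes
    return build_minutes + sample_interruption_cost()
```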
Running the simulation for each of the 1000 developers, and for every build duration between 0 and 5 minutes (in half second increments – e.g. 0, 0.5, 1, 1.5 seconds…), gave these results:
This plot is difficult to make much sense of at a glance. Most of the plot area looks to be dominated by random noise. This large noisy area is the developers incurring the full interruption cost of 30-360 minutes.
However on closer inspection, it does look as though this full interruption cost occurs less often in the 0-1 minute build time range – the points aren’t so dense here. This is what we would expect given the distraction threshold distribution.
Reassuringly, the high density of points in the 0-30 minute cost range (i.e. the solid blue band) shows that most of the costs still fall within this range.
Rather than plotting the raw data, we can get a clearer picture of what’s going on by only looking at the average cost incurred at each possible build time. This way we’ll have only one point to plot for each half-second, rather than 1000.
Now it’s much easier to make out a curve in the relationship between productivity cost and build time. This is even clearer if a regression line is fitted to the data:
The curve appears to be S-shaped: relatively shallow and linear between 0-15 seconds, rapidly curving upwards after that, and only levelling off again around the 4 minute build time mark. The cost increases are fastest for builds about 15 seconds to 2 minutes long.
We can also see that the productivity cost becomes more variable (less predictable) as build time increases.
What would some of the business implications of this nonlinear relationship be? What can you take away from this if you’re managing or participating in a software development team?
On one hand, it means that slow builds probably cost a lot more than you would expect. Rather than one minute of build time equalling roughly one minute of lost productivity, the real cost could be up to 8 minutes per minute of build time in the worst case (the steepest part of the curve on the graph above).
The good news is that the potential cost savings are also a lot higher than you might expect. This is particularly true if the software’s build time is in the “danger zone” for developers becoming distracted – say between 30 seconds and 2 minutes long. Any investment into reducing the build time here should give disproportionate returns.
To illustrate this, assume a company’s app takes 2 minutes to build. If management decides to invest 6 developer-hours into improving the build time, and this succeeds in reducing it by only 10 seconds, this ends up saving roughly 4 minutes of productive time per build.
Plugging in some assumptions from earlier (5 developers in a team, running 24 builds a day, $100 per dev hour), this 4 minute build time saving equates to 8 developer-hours a day, or $4000 a week.
A weekly return of $4000 on a one-off investment of $600 is favorable by any measure.
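The return-on-investment arithmetic, sketched with the same assumptions as before:

```python
developers = 5
builds_per_day = 24  # per developer
hourly_rate = 100  # USD per developer-hour
saving_per_build_minutes = 4  # from the 10 second build improvement
investment_hours = 6

saved_hours_per_day = developers * builds_per_day * saving_per_build_minutes / 60
weekly_saving = saved_hours_per_day * hourly_rate * 5  # working days
one_off_investment = investment_hours * hourly_rate

print(saved_hours_per_day, weekly_saving, one_off_investment)  # 8.0 4000.0 600
```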
It should be pointed out that build times over the average distraction threshold (say more than 5 minutes) are likely to always incur the full interruption cost. In this zone, small improvements to the build time won’t give the same return on investment.
Investment in improving build times might be attractive to product managers for another reason. Compared to other potential development tasks, it’s a low risk activity. Improving build times is generally less likely to cause unintended consequences to users compared to feature development or bug fixing work.
New feature work, while necessary to keep the product useful and competitive, can sometimes fail to pay for itself. Sometimes a new feature just falls flat with users, or even gets a negative reaction. In any case it will make the software more complex and costly to maintain.
Bug fixing work is lower-risk. But it still means changing the software’s source code, which can introduce new bugs.
Improving the build speed is relatively safer because there are many ways of doing it without changing the software’s source code. Perhaps the most obvious example of this is simply to buy faster machines. Users get the same software, it’s just built faster.
While it is true that optimizing build times can also introduce bugs, this is more true for aggressive optimizations. It’s less true for the kind of easy wins that can be gained if the build process has been a neglected area of investment.
Note that this is an example of “Via Negativa” – improvement by subtraction of the undesirable, as opposed to improvement by addition of the (assumed-to-be) desirable. Improving things by removing what’s bad is lower-risk because, as Nassim Taleb says: “we know a lot more about what is wrong than what is right.”
With that in mind, here are some of the known ways in which this model is a simplification. These also point the way towards how further research could be done to build a more accurate model.
The distraction threshold distribution was modelled on survey responses from 7 (probably Android) developers. Ideally a much larger sample would be used, with the distraction threshold inferred not from self-reports, but from (e.g.) analytics data from developer tools. Jetbrains could probably build a much more accurate picture from their IDE usage data.
The model assumes that each developer has a single, unchanging distraction threshold. In reality, a developer’s distraction threshold could vary over time for any number of reasons. For example, research suggests that we’re less productive (and more distractible?) near the end of the week, after lunchtime and during winter.
It’s assumed that developers attempt to resume their task as soon as the build finishes. In reality, it’s possible that they don’t return to development right away, but continue with the other task for a while. For example, they might finish reading the email they’re looking at first. Taking this into account would add to the interruption cost, making the current cost model a conservative estimate.
Note that there will be unknown limitations on top of these. If you notice any errors or important limitations I’ve left out, please drop me a comment below.
Slow build times can be a significant business expense when added up across a number of developers running multiple builds per day. If second-order effects are considered, such as the findings on interruption costs by Parnin and Rugaber (2010), the costs are greater still.
The upside of this nonlinearity is that it cuts both ways – there are significant cost savings to be had by even modest improvements to average build times. This is particularly the case when the current build time is hovering around the typical developer’s “distraction threshold” – the amount of time they wait before they’re likely to switch to another task. A straw poll indicated this could be somewhere between 5 seconds and 1 minute, though further research is needed to get a reliable estimate for this.
Thanks to Cristina and Dennis for reviewing a draft of this post.
“Almost” because it’s not as though developers’ brains shut off completely while waiting for a build to finish. Assuming they don’t switch to another task, it’s reasonable to assume they’ll continue mentally working on the task, to the degree this is possible. However, I’ve left this out of consideration to simplify things.
Note that an industry best practice for building quality software, called Test-Driven Development (TDD), prescribes rebuilding and retesting after every small change. This could be every few minutes. Therefore the cost of slow builds will be even higher for teams practising TDD. On the other hand, this should be mitigated by incremental builds, to the degree they’re supported.
QAs, Product Owners and other stakeholders wanting the latest app version might also start builds, or at least have to wait for them. Given that this is likely to be negligible compared with the number of builds run by developers (and to keep things simple) I’ve left this out of consideration.
Interestingly, this study from the University of Calgary found that self-interruptions (i.e. voluntary task switching) are even more disruptive than external interruptions, and also result in more errors.
Another avenue for further research: it seems likely that the expected build time is a factor in the developer’s decision of what kind of task to switch to while they’re waiting, if they do switch tasks. E.g. if a developer knows ahead of time that the build is likely to take 10 minutes, perhaps they are more likely to visit the kitchen for a snack versus an expected build time of 10 seconds. Another nonlinearity.
I was recently exposed to the idea of Non-Fungible Tokens (NFTs) through Twitter. I’ve struggled to wrap my head around them ever since. What does it mean that people are willing to pay millions to own a Tweet or JPEG? How does that sentence make any sense? And why does the concept of NFTs reliably evoke such polarised emotional reactions?
There was a recent article featured on Hacker News titled NFTs Are a Dangerous Trap. This was useful, but perhaps more for the comments it generated than for the article itself. The comments represented a wide range of perspectives, often passionately and intelligently argued for. Like the parable of the Blind men and the Elephant, it’s useful to listen to a number of voices when something new and mysterious is encountered. Each one might have a grasp of part of the truth. It’s also possible the new thing is in fact many things all at once.
The Blind men and the Elephant
After some reading and thinking, the conclusion I started to come to was that NFTs aren’t hard to grasp for technical reasons – at least not primarily.
Rather, NFTs are hard to grasp mostly because of the strangeness of the more fundamental ideas they rest upon – ideas like ownership and value. These are ideas that have, until recently, been easy not to think too much about. NFTs have changed this by being so difficult to ignore.
Is Ownership Real?
To test this theory, let’s forget about NFTs for a second.
What does it mean to “own something”, really? How does the concept of ownership map onto the objective, material world, if at all? When you get down to it, does owning something just mean the ability to defend access to it through the threat of physical force? It could be you, your family, friends, hired guards or the state that wields this force. In any case, it’s hard to imagine ownership would count for much if it couldn’t ultimately be enforced physically.
It could be argued that direct, physical force isn’t needed to enforce ownership. Imagine children playing the board game Monopoly. If one child openly steals the other’s Monopoly money, the other children will soon refuse to play with him. Social exclusion is really painful; it appears to activate the same areas of the brain as physical pain. This makes a lot of sense from an evolutionary perspective. Exclusion from the social group means a lower chance of survival and of finding a mate. Direct physical force might be more obvious and easier to observe, but the effect is the same in both cases – pain, potential harm and death.
So on one hand ownership is “merely” an abstract concept, a social contract that we act out. It’s not something concrete you could point to in the external world. On the other hand, violating it leads to undeniably real and painful consequences. Given this, is it so unreasonable to act as if it were just as real as something you could feel, smell, taste or see (to quote Morpheus)?
Is Value Purely Subjective?
Another, equally mysterious topic rudely forced onto us by the arrival of NFTs is that of value. What makes something valuable? Is this value “real” or something purely subjective, reliant on collective whim, at risk of evaporating at any moment? If so, is this true of everything valued? Or are some things really, intrinsically valuable, independent of anyone’s opinion?
I suspect these questions are difficult because they can’t be easily reconciled with a scientific, materialist view of reality – the dominant view of our culture. They would sound absurd to someone from a pre-scientific time. “Of course those shining gold bars are really valuable. Just look at them.”
To borrow an example, imagine Elvis Presley’s guitar. It is functionally identical to thousands of other guitars like it, yet it is far more valuable. Is this value “real”? If it is, it can’t be due to the physical properties of the object itself. Its value isn’t intrinsic to the object, much like how paper money isn’t intrinsically valuable. You can’t do anything with it that you couldn’t do just as well with any other guitar (from a narrow, utilitarian perspective at least).
Is it valuable then merely because enough other people agree that it is? Then how does this agreement come about?
There seems to be some relationship between an item’s uniqueness or rarity and its value. Could this be the reliable, objective foundation on which broad agreement on value rests? There was only one Elvis Presley and that was the one (generic, mass-produced) guitar he played.
However, on further inspection this uniqueness/rarity theory doesn’t hold up. Why? Because the same could be said about any second-hand guitar – that it was once owned by a unique person. Only, say, John Doe from down the street instead of Elvis.
So being associated with a unique individual isn’t enough, there’s more going on here. An obvious omission is that Elvis is widely recognised as an important figure in popular culture, unlike John Doe (in Western culture at least – more on this soon). Why does this matter though? Why does this mean that more people would be willing to expend more hours of their labour to own his guitar? Perhaps because there are many more people with a positive emotional connection to Elvis and his music – his guitar has sentimental value for them. Seeing Elvis’ guitar hanging up in their house would make them smile and feel a sense of pride. It would also reliably communicate something to others and themselves about their identity.
You could go further still and make the case that the new guitar owner’s positive emotions are due to their increased social status. This increase in status translates to increased survival and reproductive opportunities, etc. Though perhaps this isn’t necessary; it’s enough to simply recognise that the expectation of positive emotions and a buffering of one’s sense of identity are part of what makes something valuable.
Are iPhones Intrinsically Valuable?
Earlier I said that Elvis is widely recognised as an important figure in Western culture at least. Conversely, it’s not hard to imagine a remote, isolated culture where Elvis is unknown and his guitar has no use other than as firewood. The same is true of objects we might think of as having “intrinsic” value – a BMW or an iPhone let’s say. Move these objects to somewhere without electricity, gas and internet access and they become basically worthless. So their “intrinsic value” is revealed to be nothing of the sort. The value of an iPhone is just as contingent and dependent on a particular context as Elvis’ guitar.
So is there anything with intrinsic and objective value, that is valuable for all people at all times – past, present and future? What about food? This is more of a stretch, but it’s not impossible to imagine a future where Elon Musk’s Neuralink reaches the point where it’s possible for us to transplant our consciousness into a machine. Food as we know it today would then be worthless.
Where Two Worlds Meet
It’s as if the question of value has no bottom. The more you think about it, the more questions are raised and the more mysterious it becomes. It’s indefinitely deep. NFTs have opened Pandora’s Box, and the mysterious nature of value is one of its contents.
To repeat, I suspect such questions are difficult to us because they can’t be easily reconciled with a purely scientific, materialist worldview. In order to objectively explain events in an independently-reproducible way, it’s necessary to eliminate the subjective human experience from consideration as much as possible. This is by design, and is necessary for science to function.
But try to measure the value of a guitar from an objective standpoint – it’s not possible. Any question of value must involve a subject. And while an item’s value isn’t objective, it cannot be easily dismissed as “merely subjective” and therefore irrelevant. The value of a stack of papers with pictures of the Queen might not be observable or measurable in a lab, but try throwing it into a crowd. Their resulting behaviour will most definitely be independently observable, measurable – and relevant to a living person.
A final leap of wild speculation. Could it be that NFTs have done nothing less than expose a place where the objective and subjective (or concrete and abstract) worlds meet, showing them to be part of the same world, not quite so distinct and irreconcilable after all? And all in a way that’s difficult to ignore.
It’s interesting how learning is sometimes like travel. At the end of the journey, you end up back where you started, only with a broadened perspective.
Trivial example: code comments. As a new programmer, I commented my code extensively. This was probably to compensate for my difficulties in reading code. It also allowed me to be lazy in the code I wrote. There isn’t as much of a need for clear and readable code when there are always plain English comments to fall back on.
Then I joined a team that was militantly against comments. Every comment was an admission of a failure to write readable code. I took this on-board and disciplined myself not to rely on comments. This forced me to spend more time on refactoring my code to make it as readable and “self-documenting” as possible. I still wouldn’t say I’m there yet, but I’m confident that my code is much more readable now than it was then.
However, in the last year or so, I’ve started adding comments again. Not too many, but here and there to add more context. It felt like blasphemy at first. Some of my comments are even somewhat redundant, but I’ve since learned that encoding the same information in redundant ways aids comprehension. For example, traffic lights use both colour and position to encode the same information.
Superficially it seems as though I’ve returned to where I started: I’m commenting code again. However, if I hadn’t disciplined myself to do without comments for a while, I wouldn’t have been forced to learn how to write more readable code. Abstaining from code comments is a forcing function for more readable code. At some point, you can re-integrate commenting as a useful tool rather than a crutch.
Interestingly, I’m confident that if I could travel back in time and tell past-me or any of my old teammates that, actually, comments can be a valuable tool to aid comprehension, I’m positive I would encounter strong resistance. This is just as it should be; I don’t think it can be any different. When you’re learning a new skill, it’s necessary to become somewhat closed off and follow a direction single-mindedly for a while. It’s part of the learning process. If you were too suggestible and deviated too easily, you would go around in small circles and never get anywhere.
You see the same pattern across the development community when a new technology or framework is introduced (I won’t name names. I’m sure you can think of some examples). In the early days, there is hype and zealotry. Then over time as more developers adopt the tech into production and real issues with it emerge, there is disillusionment and abandonment. Finally a more realistic, calibrated picture of the trade-offs of the technology emerge.
To show a specific selection of your posts in the sidebar, contrary to what some top-ranking search results will tell you, you don’t need to install yet another spammy WP plugin or write any HTML.
All you need is the standard Text widget.
Here’s how to add your own “selected posts” section.
Go to Dashboard > Appearance > Customize
Click Widgets > Main Sidebar (this part may vary depending on your theme. I’m currently using Astra)
Click Add a Widget then “Text – Arbitrary text”:
Enter “Selected Posts” for the widget title, or something similar
To add posts, click Insert/edit link then start typing the title of one of your posts. WordPress will auto-suggest matching posts in a drop-down list.
Note: you can customize the link titles in case the default ones include odd formatting characters.
When you’re done hit Publish.
The style of the final result should match your Recent Posts widget (if you have one) exactly:
I hope this is of some use. WordPress has a thriving ecosystem of plugins but many of them are spammy at best, predatory at worst, and (in this case) completely unnecessary.
Like energy, the total testing burden for a piece of software is constant. It can’t be added to nor reduced, only transformed.
However, the cost to your business (in terms of time, money and trust) of these different forms is variable.
Here’s an example.
Let’s say I’ve been tasked with adding a new screen to my company’s mobile app. Let’s also say I’m not in the mood for following TDD right now. I just want to see the new screen in the app as soon as possible.
So I go ahead and write the feature without any tests.
Since I’m still somewhat of a responsible developer, I realize I need to test the scenarios for the new screen before shipping it. It’s just that now I have to test them manually.
Luckily, everything works the first time. Nice. I just cleverly avoided spending an extra hour of tedious development time writing tests.
Or did I?
Notice that I didn’t actually dodge any testing responsibilities. I simply exchanged up-front, deterministic, slow-to-write, fast-to-run tests for after-the-fact, non-deterministic, free-to-write, slow-to-run tests. This seems like a good deal if I only need to run the tests once.
In reality, software always needs changing. Software that doesn’t change is obsolete, or soon will be, almost by definition. So changes to my new screen will inevitably be needed. And with every change, the tests need to be run again, and the prospect of doing them manually becomes less and less attractive. This shifts the cost-benefit equation in favour of the up-front, automated tests.
Let’s say I’m a less responsible yet even more confident developer (not a good combo). After coding the new screen, I just give it a quick smoke test. The screen shows up with the expected content. Nice. Ship it. I go and get a coffee, satisfied with my speed and skill as a developer, having cleverly dodged both automated testing and manual testing.
What about the burden of testing the different edge cases for the new feature? Did it evaporate? No, it was merely shifted onto the users – unbeknownst to them. The testing effort remains constant but the cost has changed. The immediate, measurable cost of developer time is exchanged for a delayed, less-measurable cost to your product and company’s reputation – i.e. lost trust with your users as they discover bugs that could have easily been found prior to shipping. It’s not hard to see how this will ultimately impact your company’s bottom line.
Now, am I making the case that you should work to predict all possible errors for all conceivable scenarios before shipping anything? Not at all. Due to the ultimately unpredictable complexity of a real production environment with real users, there is a class of errors which you can only find out about by shipping. Some of the testing burden forever rests on your users.
80 percent of the bugs are caught with 20 percent of the effort, and after that you get sharply diminishing returns.
Failing to ship fast enough deprives your users of valuable-but-imperfect software and deprives your team of valuable learning. This learning is absolutely crucial in adapting the product quickly enough for the business to survive.
The implication of the Law of Conservation of Testing Effort is not that all testing should be done before production. It’s that attempting to dodge the kinds of testing more appropriate to pre-production merely shifts the burden into post-prod, where it costs more.
The phrase “I don’t always test, but when I do, I test in production” seems to insinuate that you can only do one or the other: test before production or test in production. But that’s a false dichotomy. All responsible teams perform both kinds of tests. [Emphasis mine]
What if the file is large and/or you’re running on a low-memory device? This might be the case if you’re writing a firmware updater to run on an embedded device e.g. a Raspberry Pi Zero.
In this case, the above code could fail at runtime due to an out of memory error. This is because fs.readFileSync tries to read the entire contents of the file into memory, which might be more than you have available.
To avoid this, you can use streams. It turns out the Verify class extends stream.Writable, so you can pipe data from the input file to it like this:
It’s coming into summer here in New Zealand. Around this time every year, the strawberry farm near our house opens for the season, selling trays of fresh strawberries and real fruit ice cream.
On a hot Sunday afternoon a few weekends ago, my partner and I were driving home from some errands. Her brother had told us that the farm was open for the season and we thought we’d stop by for ice cream.
Cars lined the streets as we approached the carpark entrance. Not a good sign. I decided to try our luck anyway, which was a mistake. We were soon stuck behind a minivan in a traffic queue with no signs of moving. We decided that my partner should jump out and queue up for ice cream; there was no reason for both of us to be stuck in the car.
After a close call with the gentleman in the minivan attempting to back his way out of the queue, I eventually escaped the chaos of the carpark and found a park on the roadside some distance away.
The queues inside for ice cream weren’t any better than the queues outside. A sea of people spilled out of the corrugated iron farm shed that served as the shop.
The system in the farm shed is as follows: first you queue up for the cashier on the left-hand side of the shed. Here you can buy trays of strawberries and tickets for ice cream. They only accept cash (this is unheard of in NZ; even food trucks carry eftpos terminals). To actually get your ice cream(s), you then need to join one of five other queues to redeem your ticket.
That’s the theory anyway. In practice, the sheer number of people and lack of any barriers made it hard to tell where one queue started and another ended. The five ice cream queues were longer than the cashier queue and effectively cut it off, requiring people to push through the ice cream queues to reach it.
The ticket system was confusing. One family in front of me had been waiting in the ice cream queue for some time before realising they first needed to get a ticket from the other queue.
The air inside was tense; a feeling of desperation to get one’s ice cream and get out was palpable. The self-satisfied expressions of people carrying stacked trays of fresh strawberries or ice creams back from the front didn’t help matters (okay, I might be projecting a bit here).
Matters definitely weren’t helped by the slow queues. The one we were standing in hadn’t moved perceptibly in fifteen minutes. The ones next to us were inching forward slowly at least. This discrepancy was caused by each queue being served by a different, dedicated ice cream machine operator, and there was clearly some diversity in operator experience.
On top of this, the spectre of COVID-19 is still lingering, New Zealand just having been through our second wave of cases. I know this because people behind us joked about it being a “covid queue” more than once, before giving up and leaving. Social distancing wasn’t possible in this unruly mass of people.
We finally got our ice creams after half an hour of waiting. There were picnic tables outside to sit at, but no shade to speak of. Thankfully, the late-afternoon, early-summer sun wasn’t too bad.
Once we were seated… it all made sense. You just can’t get ice cream like this anywhere else; the strawberries were so fresh. The portion size was generous to say the least. Despite everything, it was worth the wait.
As I was enjoying my ice cream, I spotted a single, overflowing rubbish bin across the courtyard. There was no bathroom or anywhere to wash your hands in sight. Clearly, there was a lot the owners could do to improve things here. My partner pointed out that a single queue shared across the five ice cream machine operators, like airport luggage check-ins, would be both faster and fairer.
And yet, none of these problems – the lack of carparks, facilities, eftpos and a sane queuing system – seemed to matter one bit. They could barely keep up with the demand. It was the same last year, and I suspect every other year. I doubt they do any marketing other than word-of-mouth. Locals see that they’re open and tell their friends and family, the same way we found out. They have a product you just can’t get anywhere else, not without a long drive at least, and certainly not from any supermarket.
More than that, I suspect their no-frills approach actually works in their favour. And by this I don’t just mean that they get to avoid bank fees by only accepting cash, although that too. Their obvious popularity despite their obvious flaws is informative. The only logical explanation is that what they offer is more than good enough to compensate.
Say you had the choice between two surgeons of similar rank in the same department in some hospital. The first is highly refined in appearance; he wears silver-rimmed glasses, has a thin build, delicate hands, a measured speech, and elegant gestures. His hair is silver and well combed. He is the person you would put in a movie if you needed to impersonate a surgeon. His office prominently boasts an Ivy League diploma, both for his undergraduate and medical schools.
The second one looks like a butcher; he is overweight, with large hands, uncouth speech and an unkempt appearance. His shirt is dangling from the back. No known tailor in the East Coast of the U.S. is capable of making his shirt button at the neck. He speaks unapologetically with a strong New Yawk accent, as if he wasn’t aware of it. He even has a gold tooth showing when he opens his mouth. The absence of diploma on the wall hints at the lack of pride in his education: he perhaps went to some local college. In a movie, you would expect him to impersonate a retired bodyguard for a junior congressman, or a third-generation cook in a New Jersey cafeteria.
Now if I had to pick, I would overcome my suckerproneness and take the butcher any minute. Even more: I would seek the butcher as a third option if my choice was between two doctors who looked like doctors. Why? Simply the one who doesn’t look the part, conditional of having made a (sort of) successful career in his profession, had to have much to overcome in terms of perception.
How does any of this apply to building a startup? I think this is best summarised in an essay by Paul Graham, founder of Y Combinator. He points out that a classic mistake made by new founders is “playing house”. That is, investing too much in window dressing like a flashy website, high-production-value videos, attending conferences and trade shows, getting mugs and pens with your company logo printed on them, that kind of thing, at the expense of actually building something that people love.
I know I’ve repeatedly fallen prey to this. As a founder you feel you need at least some of this surface stuff to be taken seriously. If not by investors then at least by friends and family. But as PG points out, the best way to convince investors (the non-gullible ones at least) is to build something people want. Evidence of high user engagement or growth is very convincing.
What if you want to grow your startup organically instead? You might be able to convince users to give your product a try with a slick website and marketing materials. However, if the product is underwhelming, they’re not going to stick around long or say anything good about it to their friends. It’s not a sustainable long-term strategy.
The strawberry farm is living proof that you can build a successful business on a great product and very little else.
If you found this article useful or interesting, drop me a comment or consider sharing it with people you know using the buttons below. – Matt (@kiwiandroiddev)
Here’s an unavoidable fact: the software project you’re working on has some flaws that no one knows about. Not you, your users, nor anyone in your team. These could be anything from faulty assumptions in the UI to leaky abstractions in the architecture or an error-prone release process.
The good news is that there are some things you can do to force issues up to the surface. You might already be doing some of them.
Here are some examples:
Dig out an old or cheap phone and try to run your app on it. Any major performance bottlenecks will suddenly become obvious
Pretend you’re a new developer in the team[1]. Delete the project from your development machine, clone the source code and set it up from scratch. Gaps in the Readme file and outdated setup scripts will soon become obvious
Try to add support for a completely different database. Details of your current database that have leaked into your data layer abstractions will soon become obvious
Port a few screens from your front-end app to a different platform. For example, write a command-line interface that reuses the business and data layers untouched. “Platform-agnostic” parts of the architecture might soon be shown up as anything-but
Start releasing beta versions of your mobile app every week. The painful parts of your monthly release process will start to become less painful
Put your software into the hands of a real user without telling them how to use it. Then carefully watch how they actually use it
To borrow a term from interaction design, these are all examples of Forcing Functions. They raise hidden problems up to consciousness in such a way that they are difficult to ignore and therefore likely to be fixed.
Of course, the same is true of having an issue show up in production or during a live demo. The difference is that Forcing Functions are applied voluntarily. It’s less stressful, not to mention cheaper, to find out about problems on your own terms.
If you imagine your software as something evolving over time, strategically applying forcing functions is a way of accelerating this evolutionary process.
Are there any risks in doing this? A forcing function is like an intensive training environment. And while training is important, it’s not quite the real world (“The Map Is Not the Territory”). Forcing functions typically take one criterion for success and intensify it in order to force an adaptation. Since they focus on one criterion and ignore everything else, there’s a risk of investing too much in optimizing for that one thing at the expense of the bigger picture.
In other words, you don’t want to spend months getting your mobile game to run buttery-smooth on a 7-year-old phone only to find out that no one finds the game fun and you’ve run out of money.
Forcing functions are a tool; knowing which of them to apply in your team and how often to apply them is a topic for another time.
However, to give a partial answer: I have a feeling that regular in-person tests with potential customers might be the ultimate forcing function. Why? Not only do they unearth a wealth of unexpected issues like nothing else, they also give you an idea of which other forcing functions you might want to apply. They’re like a “forcing function for forcing functions”.
Or to quote Paul Graham:
The only way to make something customers want is to get a prototype in front of them and refine it based on their reactions.
If you found this article useful, please drop me a comment or consider sharing it with your friends and colleagues using one of the buttons below. – Matt (@kiwiandroiddev)
[1] Thanks to Alix for this example. New starters have a way of unearthing problems not only in project setup, but in the architecture, product design and onboarding process at your company, to give a few examples.
Multicast DNS service discovery, a.k.a. Zeroconf or Bonjour, is a useful means of making your node app (e.g. multiplayer game or IoT project) easily discoverable to clients on the same local network.
The node_mdns module worked out-of-the-box on my Mac. Unfortunately things weren’t as straightforward on a node-alpine docker container running on Raspberry Pi Zero, evidenced by this error at runtime:
Error: dns service error: unknown
at new Browser (/home/app/node_modules/mdns/lib/browser.js:86:10)
at Object.create [as createBrowser] (/home/app/node_modules/mdns/lib/browser.js:114:10)
Here’s how I managed to solve this. The following was pieced together from a number of sources (linked at the end).
I’ll assume you have a node app using node_mdns to publish your service, and a Dockerfile based on alpine-linux to build your app into an image for running on the Pi.
First, you’ll need the Alpine packages for the Avahi daemon, along with its development headers and the Bonjour compatibility layer. I.e. in your Dockerfile:
# Avahi is for DNS-SD broadcasting on the local network; DBUS is how Avahi communicates with clients
RUN apk add python make gcc libc-dev g++ linux-headers dbus avahi avahi-dev avahi-compat-libdns_sd
You’ll need to make sure the DBus and Avahi daemons are started in your container before starting your node app. Since you can only execute a single startup command from your Dockerfile, we’ll need to bundle the commands into a startup script and run that from the Dockerfile’s CMD. The script looks something like this (dbus-daemon forks into the background by default; avahi-daemon is backgrounded explicitly):
#!/bin/sh
dbus-daemon --system
avahi-daemon --no-chroot &
node index.js # your app script here
Note: --no-chroot is added to avoid this runtime error:
alpine linux netlink.c: send(): Not supported
Build your Docker image (since this is for a Pi Zero in my case, I’m using Docker buildx to build for the ARMv6 architecture on my Mac; I recommend this over waiting days or weeks for it to build on the Pi Zero itself):
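The build invocation looks something like the following. This assumes you have a buildx builder configured; the image tag here matches the local-registry example used in the run command later on:

```shell
# Cross-build for ARMv6 (Pi Zero) and push straight to the registry
docker buildx build --platform linux/arm/v6 -t localhost:5000/myapp --push .
```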
Now push then pull your Docker image onto your Raspberry Pi. If you don’t want to use a cloud-hosted registry, I’d recommend looking into setting up a local registry to push it directly to the Pi on your local network.
To run your docker image on the Pi, you’ll first need to disable the host OS’s avahi-daemon (if any) to prevent conflicts with the avahi-daemon that will be running inside your alpine-linux container. On Raspbian, you can disable avahi with:
# SSH into your Pi
sudo systemctl stop avahi-daemon
sudo systemctl disable avahi-daemon
Then to run your docker image:
docker run -d --net=host localhost:5000/myapp
(localhost:5000 here refers to a local docker registry.) Using the host’s network (--net=host) seems to be necessary for mDNS advertisements to function. In theory you should be able to just map port 5353/udp from the container, but this didn’t work. (If you happen to know why, please drop a comment below).
That’s it. If all goes well you should be able to see your service advertised on the local network. E.g. from a Mac on the same network (the last line is our node app’s http service):
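One way to check is with macOS’s built-in dns-sd tool. This assumes your app advertises an _http._tcp service; substitute your actual service type:

```shell
# Browse for HTTP services on the local network; Ctrl-C to stop.
# Your advertised instance should appear in the list after a moment.
dns-sd -B _http._tcp local.
```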