Like energy, the total testing burden for a piece of software is constant. It can’t be added to nor reduced, only transformed.
However, the cost to your business (in terms of time, money and trust) of these different forms is variable.
Here’s an example.
Let’s say I’ve been tasked with adding a new screen to my company’s mobile app. Let’s also say I’m not in the mood for following TDD right now. I just want to see the new screen in the app as soon as possible.
So I go ahead and write the feature without any tests.
Since I’m still somewhat of a responsible developer, I need to test the scenarios for the new screen before shipping it. It’s just that now I have to test them manually.
Luckily I mean uh, because I’m an awesome developer, everything works the first time. Nice.
I just cleverly avoided spending an extra hour of tedious development time writing tests.
Or did I?
Notice that I didn’t actually dodge any testing responsibilities. I simply exchanged up-front, deterministic, slow-to-write, fast-to-run tests for after-the-fact, non-deterministic, free-to-write, slow-to-run tests. This seems like a good deal if I only need to run the tests once. And it would be, if this were ever true.
In reality, software that never changes is almost immediately obsolete. Changes to this new screen will inevitably be needed. And with every change, the tests need to be run again. This shifts the cost-benefit equation in favour of the up-front, automated tests. The prospect of running through all of the scenarios manually each time is becoming less and less attractive and clever-sounding.
Now, on to a more extreme example.
Let’s say I’m a less responsible yet even more confident developer (not a good combo). After coding the new screen, I run the app and give it a quick smoke test. The screen shows up with the expected content. Nice. Ship it. I go and get a coffee, smugly satisfied with my speed and skill as a developer, having cleverly dodged both automated testing and manual testing.
What about the burden of testing the different edge cases for the new feature? Did it evaporate? No, it was merely shifted onto the users – unbeknownst to them. The testing effort remains constant but the cost is different. The immediate, measurable cost of developer time is exchanged for a cost to your product and your company’s reputation – i.e. lost trust with your users as they discover bugs that could easily have been found before shipping. It’s not hard to see how this will ultimately impact your company’s bottom line, too.
Now, am I making the case that you should work to predict all possible errors for all conceivable scenarios before shipping anything? Not at all. Due to the ultimately unpredictable complexity of a real production environment with real users, there is a class of errors which you can only find out about by shipping. Some of the testing burden forever rests on your users.
Also, too much time spent trying to predict errors comes with its own costs. As Charity Majors points out in her excellent article “I test in prod”:
80 percent of the bugs are caught with 20 percent of the effort, and after that you get sharply diminishing returns.
Failing to ship fast enough deprives your users of valuable-but-imperfect software. And your team is deprived of valuable learning from real-world testing, absolutely crucial in adapting the product quickly enough for the business to survive.
The implication of the Law of Conservation of Testing Effort is not that all testing should be done before production. It’s that attempting to dodge the kinds of testing more appropriate to pre-production merely shifts the burden into post-prod, where it costs more.
The phrase “I don’t always test, but when I do, I test in production” seems to insinuate that you can only do one or the other: test before production or test in production. But that’s a false dichotomy. All responsible teams perform both kinds of tests. [Emphasis mine]
What if the file is large and/or you’re running on a low-memory device? This might be the case if you’re writing a firmware updater to run on an embedded device e.g. a Raspberry Pi Zero.
In this case, the above code could fail at runtime due to an out of memory error. This is because fs.readFileSync tries to read the entire contents of the file into memory, which might be more than you have available.
To avoid this, you can use streams. It turns out the Verify class extends stream.Writable, so you can pipe data from the input file to it like this:
It’s coming into summer here in New Zealand. Around this time every year, the strawberry farm near our house opens for the season, selling trays of fresh strawberries and real fruit ice cream.
On a hot Sunday afternoon a few weekends ago, my partner and I were driving home from some errands. Her brother had told us that the farm was open for the season and we thought we’d stop by for ice cream.
Cars lined the streets as we approached the carpark entrance. Not a good sign. I decided to try our luck anyway, which was a mistake. We were soon stuck behind a minivan in a traffic queue with no sign of moving. We decided that my partner should jump out and queue up for ice cream; there was no reason both of us should be stuck in the car.
After a close call with the gentleman in the minivan attempting to back his way out of the queue, I eventually escaped the chaos of the carpark and found a park on the roadside some distance away.
The queues inside for ice cream weren’t any better than the queues outside. A sea of people spilled out of the corrugated iron farm shed that served as the shop.
The system in the farm shed is as follows: first you queue up for the cashier on the left-hand side of the shed. Here you can buy trays of strawberries and tickets for ice cream. They only accept cash (this is unheard of in NZ; even food trucks carry eftpos terminals). To actually get your ice creams, you then need to join one of five other queues to redeem your ticket.
That’s the theory anyway. In practice, the sheer number of people and the lack of any barriers made it hard to tell where one queue started and another ended. The five ice cream queues were longer than the cashier queue and effectively cut it off, requiring people to push through the ice cream queues to reach it.
The ticket system was confusing. One family in front of me had been waiting in the ice cream queue for some time before realising they first needed to get a ticket from the other queue.
The air inside was tense; a feeling of desperation to get one’s ice cream and get out was palpable. The self-satisfied expressions of people carrying stacked trays of fresh strawberries or ice creams back from the front didn’t help matters (okay, I might be projecting a bit here).
Matters definitely weren’t helped by the slow queues. The one we were standing in hadn’t moved perceptibly in fifteen minutes. The ones next to us were inching forward slowly at least. This discrepancy was caused by each queue being served by a different, dedicated ice cream machine operator, and there was clearly some diversity in operator experience.
On top of this, the spectre of COVID-19 still lingered, New Zealand having just been through its second wave of cases. I know this because people behind us joked about it being a “covid queue” more than once, before giving up and leaving. Social distancing wasn’t possible in this unruly mass of people.
We finally got our ice creams after half an hour of waiting. There were picnic tables outside to sit at, however there was no shade to speak of. Thankfully, the late-afternoon, early-summer sun wasn’t too bad.
Once we were seated… it all made sense. You just can’t get ice cream like this anywhere else; the strawberries were so fresh. The portion size was generous to say the least. Despite everything, it was worth the wait.
As I was enjoying my ice cream, I spotted a single, overflowing rubbish bin across the courtyard. There was no bathroom or anywhere to wash your hands in sight. Clearly, there was a lot the owners could do to improve things here. My partner pointed out that a single queue shared across the five ice cream machine operators, like airport luggage check-ins, would be both faster and fairer.
And yet, none of these problems – the lack of carparks, facilities, eftpos and a sane queuing system – seemed to matter one bit. They could barely keep up with the demand. It was the same last year, and I suspect every other year. I doubt they do any marketing other than word-of-mouth. Locals see that they’re open and tell their friends and family, the same way we found out. They have a product you just can’t get anywhere else, not without a long drive at least, and certainly not from any supermarket.
More than that, I suspect their no-frills approach actually works in their favour. And by this I don’t just mean that they get to avoid bank fees by only accepting cash, although that too. Their obvious popularity despite their obvious flaws is informative. The only logical explanation is that what they offer is more than good enough to compensate.
Say you had the choice between two surgeons of similar rank in the same department in some hospital. The first is highly refined in appearance; he wears silver-rimmed glasses, has a thin build, delicate hands, a measured speech, and elegant gestures. His hair is silver and well combed. He is the person you would put in a movie if you needed to impersonate a surgeon. His office prominently boasts an Ivy League diploma, both for his undergraduate and medical schools.
The second one looks like a butcher; he is overweight, with large hands, uncouth speech and an unkempt appearance. His shirt is dangling from the back. No known tailor in the East Coast of the U.S. is capable of making his shirt button at the neck. He speaks unapologetically with a strong New Yawk accent, as if he wasn’t aware of it. He even has a gold tooth showing when he opens his mouth. The absence of diploma on the wall hints at the lack of pride in his education: he perhaps went to some local college. In a movie, you would expect him to impersonate a retired bodyguard for a junior congressman, or a third-generation cook in a New Jersey cafeteria.
Now if I had to pick, I would overcome my suckerproneness and take the butcher any minute. Even more: I would seek the butcher as a third option if my choice was between two doctors who looked like doctors. Why? Simply, the one who doesn’t look the part, conditional on having made a (sort of) successful career in his profession, had to have much to overcome in terms of perception.
How does any of this apply to building a startup? I think this is best summarised in an essay by Paul Graham, founder of Y Combinator. He points out that a classic mistake made by new founders is “playing house”: investing too much in window dressing like a flashy website, high-production-value videos, attending conferences and trade shows, getting mugs and pens printed with your company logo, that kind of thing, at the expense of actually building something that people love.
I know I’ve repeatedly fallen prey to this. As a founder you feel you need at least some of this surface stuff to be taken seriously. If not by investors then at least by friends and family. But as PG points out, the best way to convince investors (the non-gullible ones at least) is to build something people want. Evidence of high user engagement or growth is very convincing.
What if you want to grow your startup organically instead? You might be able to convince users to give your product a try with a slick website and marketing materials. However, if the product is underwhelming, they’re not going to stick around long or say anything good about it to their friends. It’s not a sustainable long-term strategy.
The strawberry farm is living proof that you can build a successful business on a great product and very little else.
If you found this article useful or interesting, drop me a comment or consider sharing it with people you know using the buttons below. – Matt (@kiwiandroiddev)
Here’s an unavoidable fact: the software project you’re working on has some flaws that no one knows about. Not you, your users, nor anyone in your team. These could be anything from faulty assumptions in the UI to leaky abstractions in the architecture or an error-prone release process.
The good news is that there are some things you can do to force issues up to the surface. You might already be doing some of them.
Here are some examples:
Dig out an old or cheap phone and try to run your app on it. Any major performance bottlenecks will suddenly become obvious
Pretend you’re a new developer in the team [1]. Delete the project from your development machine, clone the source code and set it up from scratch. Gaps in the Readme file and outdated setup scripts will soon become obvious
Try to add support for a completely different database. Details of your current database that have leaked into your data layer abstractions will soon become obvious
Port a few screens from your front-end app to a different platform. For example, write a command-line interface that reuses the business and data layers untouched. “Platform-agnostic” parts of the architecture might soon be shown up as anything-but
Start releasing beta versions of your mobile app every week. The painful parts of your monthly release process will start to become less painful
Put your software into the hands of a real user without telling them how to use it. Then carefully watch how they actually use it
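To make the database example concrete, here’s a hypothetical sketch (all names invented): a data layer that looks abstract on paper, but whose interface leaks SQL – something that only becomes obvious once you try to back it with a store that doesn’t speak SQL.

```javascript
// An apparently database-agnostic store... except callers pass SQL fragments,
// so no non-SQL backend can ever implement the same interface.
class SqlUserStore {
  constructor(db) { this.db = db; }
  find(whereClause) {
    return this.db.query(`SELECT * FROM users WHERE ${whereClause}`); // the leak
  }
}

// Attempting an in-memory port forces the interface to change: the query
// has to become a plain predicate rather than a SQL string.
class InMemoryUserStore {
  constructor(users) { this.users = users; }
  find(predicate) { return this.users.filter(predicate); }
}
```

Until you attempt the port, the leak is invisible; that’s the forcing function doing its job.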
To borrow a term from interaction design, these are all examples of Forcing Functions. They raise hidden problems up to consciousness in such a way that they are difficult to ignore and therefore likely to be fixed.
Of course, the same is true of having an issue show up in production or during a live demo. The difference is that Forcing Functions are applied voluntarily. It’s less stressful, not to mention cheaper, to find out about problems on your own terms.
If you imagine your software as something evolving over time, strategically applying forcing functions is a way of accelerating this evolutionary process.
Are there any risks in doing this? A forcing function is like an intensive training environment. And while training is important, it’s not quite the real world (“The Map Is Not the Territory”). Forcing functions typically take one criterion for success and intensify it in order to force an adaptation. Since they focus on one criterion and ignore everything else, there’s a risk of investing too much in optimizing for that one thing at the expense of the bigger picture.
In other words, you don’t want to spend months getting your mobile game to run buttery-smooth on a 7 year old phone only to find out that no one finds the game fun and you’ve run out of money.
Forcing functions are a tool; knowing which of them to apply in your team and how often to apply them is a topic for another time.
However, to give a partial answer: I have a feeling that regular in-person tests with potential customers might be the ultimate forcing function. Why? Not only do they unearth a wealth of unexpected issues like nothing else, they also give you an idea of which other forcing functions you might want to apply. They’re like a “forcing function for forcing functions”.
Or to quote Paul Graham:
The only way to make something customers want is to get a prototype in front of them and refine it based on their reactions.
[1] Thanks to Alix for this example. New starters have a way of unearthing problems not only in project setup, but in the architecture, product design and onboarding process at your company, to give a few examples.
Multicast DNS service discovery, aka. Zeroconf or Bonjour, is a useful means of making your node app (e.g. multiplayer game or IoT project) easily discoverable to clients on the same local network.
The node_mdns module worked out-of-the-box on my Mac. Unfortunately things weren’t as straightforward on a node-alpine docker container running on Raspberry Pi Zero, evidenced by this error at runtime:
Error: dns service error: unknown
at new Browser (/home/app/node_modules/mdns/lib/browser.js:86:10)
at Object.create [as createBrowser] (/home/app/node_modules/mdns/lib/browser.js:114:10)
Here’s how I managed to solve this. The following was pieced together from a number of sources (linked at the end).
I’ll assume you have a node app using node_mdns to publish your service, and a Dockerfile based on alpine-linux to build your app into an image for running on the Pi.
Firstly, you’ll need the Alpine packages for running the Avahi daemon, along with its development headers and the Bonjour compatibility library. I.e. in your Dockerfile:
# Avahi is for DNS-SD broadcasting on the local network; DBUS is how Avahi communicates with clients
RUN apk add python make gcc libc-dev g++ linux-headers dbus avahi avahi-dev avahi-compat-libdns_sd
You’ll need to make sure the D-Bus and Avahi daemons are started in your container before your node app. Since you can only execute a single startup command from your Dockerfile, bundle the commands into a startup script, and run that script from your Dockerfile’s CMD:
dbus-daemon --system
avahi-daemon --no-chroot &
node index.js # your app script here
Note: --no-chroot is added to avoid this runtime error:
alpine linux netlink.c: send(): Not supported
Build your Docker image (since this is for a Pi Zero in my case, I’m using Docker buildx to cross-build for the ARMv6 architecture on my Mac; I recommend this over waiting days or weeks for it to build on the Pi Zero itself):
Now push then pull your Docker image onto your Raspberry Pi. If you don’t want to use a cloud-hosted registry, I’d recommend taking a look into setting up a local registry to push it directly to the Pi on your local network.
To run your docker image on the Pi, you’ll first need to disable the host OS’s avahi-daemon (if any) to prevent conflicts with the avahi-daemon that will be running inside your alpine-linux container. On Raspbian, you can disable avahi with:
# SSH into your Pi
sudo systemctl stop avahi-daemon
sudo systemctl disable avahi-daemon
Then to run your docker image:
docker run -d --net=host localhost:5000/myapp
(localhost:5000 here refers to a local docker registry.) Using the host’s network (--net=host) seems to be necessary for mDNS advertisements to function. In theory you should be able to just map port 5353/udp from the container, but this didn’t work. (If you happen to know why, please drop a comment below).
That’s it. If all goes well you should be able to see your service advertised on the local network. E.g. from a Mac on the same network (the last line is our node app’s http service):
Automated tests improve the quality of your code, at a minimum, by revealing some of its defects. If one of your tests fails, in theory this points to a defect in your code. You make a fix, the test passes, and the quality of your software has improved by some small amount as a result.
Another way to think about this is that the tests apply evolutionary selection pressure to your code. Your software needs to continually adapt to the harsh and changing conditions imposed by your test suite. Versions of the code that don’t pass the selection criteria don’t survive (read: make it into production).
There’s something missing from this picture though. So far, the selection pressure only applies in one direction: from the tests onto the production code. What about the tests themselves? Chances are, they have defects of their own, just like any other code. Not to mention the possibility of big gaps in the business requirements they cover. What, if anything, keeps the tests up-to-scratch?
If tests are actually an important tool for maintaining code quality, then this is an important question to get right. Low-quality tests can’t be expected to bring about higher quality software. In order to extract the most value out of automated tests, we need a way to keep them up to a high standard.
What could provide this corrective feedback? You could write tests for your original tests. But this quickly leads to an infinite regress. Now you need tests for those tests, and tests for those tests, and so on, for all eternity.
What if the production code itself could somehow apply selection pressure back onto the tests? What if you could set up an adversarial process, where the tests force the production code to improve and the production code, in turn, forces the tests to improve? This avoids the infinite regress problem.
It turns out this kind of thing is built into the TDD process. Here are the 3 laws of TDD:
You must write a failing test before you write any production code.
You must not write more of a test than is sufficient to fail, or fail to compile.
You must not write more production code than is sufficient to make the currently failing test pass (emphasis mine).
It’s following rule 3 that applies selection pressure back onto the tests. By only writing the bare minimum code in order to make a test pass, you’re forced to write another test to show that your code is actually half-baked. You then write just enough production code in order to address the newly failing test, and so on. It’s a positive feedback loop.
You end up jumping between two roles that are pitted against each other: the laziest developer on the planet and a test engineer who is constantly trying to show the developer up with failing tests.
Another benefit to being lazy is that it produces lean code. At some point, there are no more tests to write; you’ve implemented the complete specification as it’s currently understood. When this happens, you will often find that you’ve written far less code than expected. This is a win because all else being equal, less code is easier to understand.
Reading about this is one thing, but it needs to be tried out to really grasp its benefits. It turns out there is an exercise/game called Evil Coder that was created to practise this part of TDD. You pair up with another developer, with one person writing tests and the other taking the evil coder role:
Evil mute A/B pairing: Pairs are not allowed to talk. One person writes tests. The other person is a “lazy evil” programmer who writes the minimal code to pass the tests (but the code doesn’t need to actually implement the specification).
“We have to balance the customer’s needs with the business needs”.
How many times have you heard this while working in a software development team?
I’ve worked as a mobile developer at a number of large companies. In enterprise environments like these, typically the mobile app is “the storefront of the business”, and brings together a number of features paid for by other departments.
Often the initial requirements from the other department will come with a suggestion to make their feature more prominent in the app. For example, “add it to the top of the dashboard”, “just add a new tab for it” or “send a push notification to our users about it”.
This is understandable. The job of the people from the other department is firstly to improve the area of the business they are responsible for. Their job is not to work out how to nicely integrate their feature into the app so it plays nicely with every other feature. That’s the app team’s job.
When members of the app team point out that adding a new top-level tab or push-notification for every new feature requested by every department isn’t a sustainable long-term strategy, and will lead to a poor user experience, the protest that often comes back is something like:
Well, we have to remember to balance the customer’s needs with the business needs.
I was never comfortable with this statement. It’s taken me a while to think through exactly why this is. What I eventually concluded is that while it seems reasonable on the surface, buried in it is a wrong assumption.
It’s not that you should always prioritize the customer’s needs over business needs, or vice versa. Rather, the assumption underlying the statement – that these two things are at odds – is wrong. It’s a false dichotomy.
To believe that “balancing the user’s needs with business needs” makes sense, you need to be engaged in short-term thinking of one kind or another.
If you want your business to survive in the long term, there can be no distinction between the interests of your customer and those of your business.
Your business exists to serve a customer, in a sustainable way. In the final analysis (assuming a free market where your customers can leave), business needs and customer needs must be aligned. Promoting one at the expense of the other actually harms both.
Likewise, building a system that helps your customers at the expense of the business is actually harming both your customers and the business.
How does this second point make sense? I.e. how does helping your customers at the expense of the business actually harm them?
Here’s how: presumably, your customers would rather your business continues to exist than not. For example, bribing customers with giveaways and subsidized prices isn’t sustainable. If you “spend 1 dollar to make 80 cents”, you will eventually go out of business.
When this happens, you will (at the very least) inconvenience your customers, leaving them bereft or forced against their wishes to switch to a competitor. Or if you offer something unique, you deprive them of that unique offering altogether.
Is it idealistic or wishful thinking to see the success of your customer and business as inextricably linked? Jeff Bezos, the CEO of Amazon, doesn’t seem to think so. The top 3 of his 4 pillars of Amazon’s success are:
Eagerness to Invent to Please the Customer
So next time you hear that the “needs of customer need to be balanced with the needs of the business” remember that to successful businesses, there is really no distinction.
Thanks to Xiao and Arun for their feedback and suggestions.
Anti-Fragile by Nassim Nicholas Taleb is a goldmine of practical ideas for software developers, despite it not being a software development book.
Redundancy is one example of such an idea that is explored. Taleb explains how having some redundancy reduces fragility, and means we don’t need to predict the future so well. Think of food stored in your basement, or cash under your mattress.
Taleb notes how nature’s designs frequently employ redundancy (“Nature likes to overinsure itself”):
“Layers of redundancy are the central risk management property of natural systems. We humans have two kidneys […] extra spare parts, and extra capacity in many, many things (say, lungs, neural system, arterial apparatus), while human design tends to be spare and inversely redundant, so to speak – we have a historical track record of engaging in debt, which is the opposite of redundancy”
Software source code is a good example of human design that tends to be “spare” (having no excess fat) and “inversely redundant”. Redundancy in code is traditionally avoided at all costs. In fact, one of the first principles that junior developers are often taught is the DRY principle – Don’t Repeat Yourself. As far as DRY is concerned, redundant code is a blight that should be eliminated wherever it shows up.
There are good reasons for the DRY principle. Duplicate code adds noise to the project, making it harder to understand without adding any obvious value. It makes the project harder to modify because the same code must be maintained separately at each place it is duplicated. Each of these locations is also another opportunity to introduce bugs. Duplicate code feels like waste.
However, as Taleb states:
“Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens – usually.” [emphasis added]
What are these “unusual things that usually happen” in software development? And how could duplicate code possibly help protect us against them?
The Wrong Abstraction
Firstly, remember that duplication is eliminated by introducing abstractions, such as a function or class. The problem with abstractions is that it is difficult to know ahead of time whether a chosen abstraction is actually a good fit for your project. And the cost of getting this wrong is high. Poorly-chosen abstractions add friction to making the kinds of changes that are actually needed for the project, while still exacting an ongoing cost in terms of complexity. There’s also the risk that by the time poor abstractions have been recognised as such, they have already spread throughout the project. Rooting them out at this point will likewise impact code all throughout the project, potentially with unintended consequences.
The “unusual things that usually happen” in software development are unexpected, unpredictable (and unavoidable) changes in business requirements. These have the annoying effect of revealing the shortcomings of your abstractions, abstractions that you perhaps added while faithfully following the DRY principle.
Too-eager abstraction and a lack of redundancy mirrors the problems of centralisation, another idea explored in Anti-Fragile. Centralisation, while efficient in the short-term (read: less code), makes systems fragile. When blow-ups happen, they can take down (or at least damage) the entire system. NNT outlines in Anti-Fragile how such fragility and lack of redundancy was the cause of the banking system collapse of 2008.
Redundancy in the form of duplicated code, on the other hand, makes code more robust. It does this by avoiding the worse evil of introducing the wrong abstraction. In this way, it limits the impact of unexpected changes in business requirements. As Sandi Metz states: “Duplication is far cheaper than the wrong abstraction”
The Rule of Three
As it turns out, there is another software development principle (or rule of thumb) which does recognise the risks of poor abstractions, and seeks to mitigate them through some redundancy. It’s called the “Rule of Three”. It states that you should wait until a piece of code appears three times before abstracting it out. (Note that this appears to contradict the DRY principle). This minimises the chances that the abstraction is premature, and increases the chances that it addresses a real, recurring feature of the problem domain that is worth the cost of abstraction.
Introducing an abstraction is in some sense a prediction of the future. Abstractions make a certain class of future changes easier, at the cost of some extra complexity and fragility. They are worth this cost if and only if the types of changes they make easier actually turn out to be reasonably common. Following The Rule of Three means deliberately holding off on making a prediction until more evidence has come in. The assumption built into the Rule of Three is that past changes are the best predictor of future changes.
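As a hypothetical illustration of that trade-off (invented names and requirements), here’s the kind of divergence that rewards waiting:

```javascript
// Two call sites format a price the same way. DRY says merge them now;
// the Rule of Three says wait for more evidence.
const cartPrice = (cents) => `$${(cents / 100).toFixed(2)}`;
const invoicePrice = (cents) => `$${(cents / 100).toFixed(2)}`;

// Then a requirement lands: invoices must show refunds in parentheses.
// Because the duplication was left alone, only one call site changes –
// no shared abstraction has to grow a flag or a special case.
const invoicePriceV2 = (cents) =>
  cents < 0
    ? `($${(Math.abs(cents) / 100).toFixed(2)})`
    : `$${(cents / 100).toFixed(2)}`;
```

Had the two been merged after the second occurrence, the refund rule would have forced a parameter onto `formatPrice` that the cart never wanted; the duplicate kept the blast radius to one function.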
Back to Nature
Now to return to Taleb’s observation of widespread redundancy in nature’s designs. An interesting implication of this is that despite all of the apparent “waste” involved, evolutionary processes have nonetheless converged onto it as the best strategy for dealing with unpredictability – a permanent feature of the real world (or at least, a better strategy than no redundancy – having one kidney, for instance).
At a high level, our software projects and teams are similar in the sense that they exist in a challenging, competitive environment punctuated by unpredictable changes. If meaningful parallels can be made between complex systems, it’s worth considering the possibility that despite the apparent “waste” involved, some redundancy is likewise the best strategy for dealing with the unpredictability in our environment too.
This is all to say: go forth and fearlessly copy-paste more code 🙂
Last month I was fortunate enough to spend two weeks traveling around southern China including Hainan, Guangzhou, Macau and Hong Kong. It was an awesome trip; I would particularly recommend stopping by Hong Kong for a few days to check it out if you get the chance. It’s an amazing, vibrant city.
At some point during the trip I started noting down the things I was learning (about travel in general, and travel around cities in particular) into Evernote. Over time the list kept growing. What follows is an edited version of the original list, compiled into a top 10 (in typical web article fashion…)
1. Get the phone numbers of your contacts in the destination country
If you’re meeting friends at the destination airport, make sure you have their mobile number. Just having them on Google Hangouts, WeChat, Facebook Messenger or <insert online service here> won’t cut it, as you can’t rely on WiFi access at airports. In Shanghai, for example, you’ll still need a local mobile number to access the “free” airport WiFi.
Old fashioned and low-tech is sometimes best.
2. Double check that airports of connecting flights match
Cities can have more than one airport, and they may not be close together at all. As a New Zealander, this was surprising to learn…
3. Bring plenty of cash in the local currency
Unlike credit and bank cards, cash is guaranteed to be accepted everywhere and is a lifesaver in emergencies.
Even if you’re going to a first-world country, don’t assume your card will be widely accepted, even at popular tourist attractions. For instance, you’ll need cash to buy a ticket for the Victoria Peak Tram in Hong Kong.
Another tip: divide your cash up and distribute it amongst your bags. That way if one goes missing, you still have backups. I had three stashes: one in my checked-in luggage, one in my backpack and a small amount in my wallet.
Again, low-tech = good.
4. Pack the night before
It’s easy to be unrealistic about how quick and painless it will be to “throw everything into your bag in the morning”. If you’re checking out of your hotel room in the morning, do all possible packing the night before.
5. Invest in good walking shoes
When you’re out exploring all day every day, decent shoes will really pay dividends. Conversely, bad shoes and feet that are killing you each day can put a damper on your travel experience!
6. Sort out mobile data for your smartphone
Having internet access on your smartphone is absolutely essential when travelling, if only for Maps/GPS, Google Translate and being able to research other places to see while you’re already out.
With that in mind, set up global roaming with your mobile provider before you leave, or check whether SIM cards are freely available in the destination country. Some countries require you to be a local resident and/or show identification to get a SIM card (Hong Kong isn’t like this; China is).
Remember to pack the SIM card removal tool for your phone, if applicable.
If going to a country with restricted internet access, you may want to sort out VPN access beforehand so you can still access the online services you’re used to (Facebook, YouTube, etc). Record multiple fallback IP addresses for your VPN provider as it’s hard to know which will be blocked.
7. Always have snacks and water with you
Bring water and lots of snack foods such as energy bars and nuts in your day pack to keep up your energy levels throughout the day. You never know where you might end up while exploring; it might be a long time between proper meals.
8. Find out the off-peak hours of the tourist attractions you want to visit…
…and go then to avoid crowds. Crowds are pretty much guaranteed no matter the time of day at remotely popular attractions, but you can avoid the worst of it with careful planning. Again, this was a bit of a surprise to someone from a country as small as N.Z., where things are pretty much guaranteed to be quiet on weekdays and mornings.
9. Get a Metro map
This is a must if you’re checking out any city with a decent metro (e.g. Guangzhou, Shanghai and Hong Kong) due to the sheer amount of time you’ll spend using it. A paper map is best (no worries about dead batteries), but you can also download a PDF onto your phone or tablet.
10. Invest in or borrow a decent camera
As good as phone cameras are these days, there’s still no substitute for a standalone camera.
And finally (bonus tip 11), if I’ve learned one thing about travel so far it’s this: the big-name tourist attractions at any given destination can be pretty overrated. They’re often so geared towards foreigners that they shield you from the actual local culture. Some of the most enjoyable experiences I’ve had travelling have been while wandering around exploring, taking it all in and spontaneously discovering things. So don’t just tick all the boxes: get out there and experience the authentic whatever-place-it-is.
Both the Developer Console and Google Analytics can display your app’s active users – the number of users who opened your app at least once on a given day. Knowing the number of active users is a good start to getting an idea of user engagement, but the problem with looking at it in isolation is that it doesn’t tell you how many of your users have your app installed but never open it.
What’s needed is a metric with more context – the number of daily active users as a percentage of total users. This is a more accurate indicator of the actual value your app is offering, and can be used to validate that specific changes to your app are actually making it more useful or enjoyable (in Lean Startup terms, it is more of a core metric and less of a vanity metric).
How to measure daily active users as a percentage for your Android app
You will need:
an Android app with Google Analytics and a reasonable amount of analytics data
Excel, LibreOffice Calc or an equivalent spreadsheet program for plotting graphs
Note: the sample screenshots I’ve included here use data from my recently released RadioDrive app.
Adjust the date range (drop-down box in the top-right corner) if necessary, then click Export > CSV
The next step is to import and combine both datasets in your spreadsheet program. Once you have copied both sets of data into the same spreadsheet, you’ll want to sort the Developer Console data by increasing date so it matches the Analytics data. To do this in Calc, box-select all rows for the date and current_user_install columns, then select Data -> Sort -> Sort by ascending date.
Move active user data so the dates correspond, if necessary…
Make a new column for percentage (Formula: =(C6/B6)*100). You can delete the Day Index column now as it’s redundant.
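If you prefer scripting to spreadsheets, the same percentage column can be computed in a few lines of Python. This is a rough sketch: it assumes you’ve already read each CSV export into a dict keyed by date (the dict structure is my assumption, not the exact export format):

```python
def active_user_percentage(installs, active):
    """Daily active users as a percentage of current user installs.

    installs: {date: current_user_installs} from the Developer Console export.
    active:   {date: active_users} from the Google Analytics export.
    Returns {date: percentage}, skipping dates missing from either export
    (equivalent to lining the two sorted date columns up in the spreadsheet).
    """
    return {
        d: round(active[d] / installs[d] * 100, 1)
        for d in sorted(installs)
        if d in active and installs[d] > 0
    }
```

For example, 50 active users on a day with 200 current installs comes out as 25.0%, the same as the `=(C6/B6)*100` cell formula.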
Plot a line graph (date on X axis, percent on Y axis)
So far, so good: we have a graph showing the percentage of active users each day.
But there’s a problem. Say you release an update for your app that is a total flop. Users start to uninstall your app in droves, except for a small segment of your dedicated fans. In this case, the percentage of active users may actually go up, as your botched update eliminates all but your most loyal users.
If you keep an eye on your other statistics, such as daily uninstalls and number of active users (as well as monitoring actual user feedback), you would (hopefully) pick up on this kind of scenario. However, it’d be nice to be able to see this situation occurring in the same graph.
To do this, you can simply plot current user installs or number of active users on the same axes. That way, you’ll know something is up if either of them starts trending downward.
Here I’ve plotted current user installs on a secondary Y axis:
The final graph (after adjusting the percentage scale to prevent overlap):
(In case you’re wondering, the lack of active user data until the 8th Dec is due to Google Analytics not being in the app until then!)
Extra credit: add a 3 or 5 day moving average trend line to % Active Users to smooth out fluctuations (having a larger sample size helps with this also).
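If you’d rather compute that trend line yourself than lean on the spreadsheet’s built-in one, a trailing moving average is only a few lines of Python (the window size and the handling of the first few days are my own choices here):

```python
def moving_average(values, window=3):
    """Trailing moving average over the previous `window` points.

    The first few days have fewer than `window` points available, so they
    average over whatever is there; the output is the same length as the
    input, ready to plot alongside the raw % Active Users series.
    """
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For instance, a daily series of 10, 20, 30, 40 percent smooths to 10.0, 15.0, 20.0, 30.0 with a 3-day window, which damps the day-to-day noise without hiding a genuine trend.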
What core metrics do you measure for your app and what tools do you use to measure them?