How To Make Technology That Changes Lives (For The Better)
As product designers, we should be trying to actively make things better.
Ethically, we all have a responsibility to make other people's lives no worse, at the very least. We should be driven to make products that make it easier for people to use technology, or that make it easier for them to live their day-to-day lives.
Making a platform or a tool user-centric is the way to create a positive experience, but it’s also the only way to build technology that can make someone’s life better. If you start with that goal, it's much harder to go astray. If you start with the intention of making the world better for people, of helping people or generally improving lives, your decisions will be directionally right.
There's a difference between building products to make end users happy and building products that make people happy
Our product is used by managers to work with customer service agents, and it makes their lives better, because it lets them manage more efficiently. They’re the end users, and as users they love the tool. If making our end users happy was our sole focus, that would be enough.
But we’ve always pushed for our tool to go that step further, and make the agents' lives better, by removing the subjectivity of judgment from their work. Daisee removes unconscious biases, creating workplace environments where agents don't have their managers unconsciously marking them down because of their race, gender or orientation.
Artificial Intelligence is a field progressing at an accelerating rate. We’re experiencing change that is rewriting the way we do things in real time. What it means for all of us is that we have a responsibility to understand the way AI is taught, the way it is applied, and the way it touches the lives of the people who interact with it, whether directly or indirectly. It’s not enough to simply build AI. It’s not enough to develop it. We need to understand, question and analyse what it does.
According to Gartner, the number of firms using AI in their operations and execution has grown by over 270% in the past four years. That data points to one conclusion: AI is growing, it isn’t going anywhere, and it is changing everything.
How can a company understand whether or not their technology is making a difference?
It's easy to set out with a goal and a vision and build something. But how do you know whether you’ve built a product that is improving someone’s life? How do you know when you’ve reached that goal? The artificial intelligence market was valued at an estimated $27.23 billion in 2019. But market size and financials don’t measure a company’s positive impact. You have to go beyond the numbers and understand the people.
There are a couple of different approaches. The first, obviously, is to ask the people who use your product: do you like this product, and does it make you happier? The second, once you have that data, is to find the people whose lives are impacted by the product but who aren’t part of the user base, and ask them the same questions. The Uber app might make individual users happy, because it reduces their travel time and makes transportation easier. But if you asked drivers whether they’re happy, you might get a very different answer.
One thing I think companies and the people who make technology in particular should think about is, would I want to use this? And would I want my family and friends to use this product? Would our lives be improved in the same way?
If you're building some hideous social-media advertising data-harvesting system, for example, and your answer to any of those questions is "God knows, there's no way in hell I'm going to use that thing", then you shouldn't be building it.
Trying to imagine yourself using it, which obviously relies on you having some ethical judgment of your own, is a good cross check. But you can also try to search for the data that will show whether or not you’re making a difference.
With Daisee, we’re trying to make something fairer. We’re trying to make the measurement of support agents' performance more equitable. So what we need to do is measure how people were being graded before and after our product was put in place. We collect the stats, analyse them, and hopefully we can say something like: before we put the system in place, the average score for men from a particular manager was 73 and for women it was 62 - and after, it's 68 for each. That data would tell us the measurement was biased previously, and that the bias has been removed.
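The before-and-after comparison described above can be sketched as a simple calculation. This is a minimal illustration with made-up numbers (borrowing the 73/62/68 example), not Daisee's actual analysis pipeline:

```python
# Sketch: compare average scores by group before and after an
# automated scoring system is introduced. All data is hypothetical.

def average_by_group(scores):
    """Return the mean score for each group in a {group: [scores]} dict."""
    return {group: sum(vals) / len(vals) for group, vals in scores.items()}

# Hypothetical manager-assigned scores before the system...
before = {"men": [75, 71, 73], "women": [60, 64, 62]}
# ...and system-assigned scores after.
after = {"men": [68, 69, 67], "women": [67, 68, 69]}

before_avg = average_by_group(before)  # {'men': 73.0, 'women': 62.0}
after_avg = average_by_group(after)    # {'men': 68.0, 'women': 68.0}

# The gap between group averages is one rough indicator of scoring bias.
gap_before = abs(before_avg["men"] - before_avg["women"])  # 11.0
gap_after = abs(after_avg["men"] - after_avg["women"])     # 0.0
```

A shrinking gap between groups suggests the earlier measurement carried bias that the new system has removed; a real analysis would of course need larger samples and proper statistical testing before drawing that conclusion.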
We're trying to make sure that whatever solution we put in place doesn't introduce new problems. We constantly push ourselves to take a step back and actually assess: what have we done? Does it solve a problem? And what were the unintended consequences?
I think it helps to keep the product focused, constantly asking yourself: What problem am I trying to solve? Did I solve it?
We try to find the friction points that are making users’ lives more difficult. And we try to find solutions that resolve those friction points without causing other unintended issues along the way.