Introduction
I’ve spent the past few years sitting in the space between technology and responsibility. It’s an awkward but important place to be, especially right now. On one side, there’s the speed and excitement of building things that reach millions. On the other, there’s the quiet work of making sure what we build doesn’t come at the expense of people, or the planet.
At ustwo, where I lead sustainability and corporate responsibility, we’ve long believed that good design can make the world a bit better. That belief is what led me to start questioning what “responsible” really means in the age of AI and to wonder how this technology fits within a B Corp mindset. How do we measure the invisible costs of training new models? What do we owe to the communities and ecosystems affected by our digital choices? And how do we balance innovation with care?
The truth is, AI isn’t just another tool. It’s a mirror. It reflects the values, blind spots, and assumptions of the people who build and use it. We can’t expect responsibility to come from regulation alone or from big tech publishing glossy reports. It all starts with us: the designers, developers, strategists, and leaders who bring these systems to life. We’re the ones in the room when decisions get made, shaping how AI shows up in the world.
The most important thing I’ve learned in nine years at ustwo is that staying human in this fast-changing landscape means asking better questions. Who benefits? Who’s left out? What’s the real cost behind the convenience we’re creating? What tools are we missing to measure our impact? This is not about mastering AI, but about keeping curiosity alive to drive positive change.
The workshop
With ClimateAction.tech, we brought together around fifty people from design, development, and strategy for a workshop called Designing Responsibly in the Age of AI. It was part reflection, part reality check: a space to pause and ask what responsibility looks like when technology starts to think alongside us. The session invited industry professionals to explore their own influence, reflect on their craft, and connect their design decisions to their wider impact on people and the planet.
Design holds a quiet power here. Every word, every interface, every technical choice shapes how people experience and trust these systems. That kind of influence doesn’t live only in research labs or boardrooms, but belongs to everyone building what comes next.
During the workshop, six themes emerged, each centred on a value people want to hold on to and build into the design of AI. Here they are, ordered by how strongly they resonated across the group:
Transparency builds trust
The word that came up most in the breakout discussions was transparency. People want clarity about what’s happening behind the interface: how AI makes decisions, where data comes from, and what the trade-offs are. When things are clear, people lean in; they stop feeling manipulated and start feeling informed. The problem is that most digital products still hide their data-hungry logic behind smooth copy and a friendly tone. Honest design can do better than that and help people see what’s really going on, without overwhelming them.
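One way to make that honesty tangible is to treat the disclosure itself as a design artefact. As a minimal sketch (the field names and wording here are my own invention, not any standard), a team could keep a structured, plain-language record of what an AI feature does and surface it right in the interface:

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """A hypothetical, human-readable record of what an AI feature
    actually does, written so it can be shown in the interface itself."""
    feature: str                # what the user sees
    model_purpose: str          # what the model decides or generates
    data_sources: list[str]     # where the data behind it comes from
    known_tradeoffs: list[str]  # limits the team is honest about

    def as_plain_language(self) -> str:
        return "\n".join([
            f"This feature ({self.feature}) uses AI to {self.model_purpose}.",
            "It draws on: " + ", ".join(self.data_sources) + ".",
            "Worth knowing: " + "; ".join(self.known_tradeoffs) + ".",
        ])

# Illustrative example, not a real product.
disclosure = AIDisclosure(
    feature="smart reply suggestions",
    model_purpose="draft short responses based on the conversation so far",
    data_sources=["anonymised past conversations", "public writing samples"],
    known_tradeoffs=["suggestions can be generic", "tone may miss context"],
)
print(disclosure.as_plain_language())
```

Writing the disclosure down in one place forces the team to answer the awkward questions before a user ever has to ask them.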
Curiosity drives sustainability
Sustainability also mattered to participants, and not just in the environmental sense, but in how we think, plan, and build products. AI looks weightless, yet it runs on vast networks of servers that burn through energy and resources. Many participants described a mindset of impact awareness: pausing to ask who benefits from their work and who, or what, gets left behind. That kind of curiosity changes things. Once you start tracing the chain of cause and effect, you can’t unsee it. You start to question your defaults, and in the best case, curiosity turns sustainability from a buzzword into a habit.
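Tracing that chain can start with deliberately rough arithmetic. Here is a back-of-envelope sketch in which every number is an assumption to be replaced with measured values from your provider and grid, not a real figure:

```python
# Back-of-envelope carbon estimate for a hypothetical AI feature.
# All three inputs below are illustrative assumptions, not measurements.

ENERGY_PER_REQUEST_WH = 3.0      # assumed energy per inference request (Wh)
GRID_INTENSITY_G_PER_KWH = 400   # assumed grid carbon intensity (gCO2e/kWh)
REQUESTS_PER_DAY = 100_000       # assumed daily usage of the feature

daily_energy_kwh = ENERGY_PER_REQUEST_WH * REQUESTS_PER_DAY / 1000
daily_co2e_kg = daily_energy_kwh * GRID_INTENSITY_G_PER_KWH / 1000
yearly_co2e_tonnes = daily_co2e_kg * 365 / 1000

print(f"Energy per day: {daily_energy_kwh:,.0f} kWh")
print(f"CO2e per day:   {daily_co2e_kg:,.0f} kg")
print(f"CO2e per year:  {yearly_co2e_tonnes:,.1f} tonnes")
```

The numbers will be wrong; the habit is the point. Once the chain from request to energy to carbon is written down, it becomes something a team can question, measure properly, and improve.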
Accountability belongs to everyone
Responsibility came up again and again throughout the session. People spoke about moments where they’d challenged a rushed decision, questioned a data source, or opened a conversation others might have avoided. No one in the group wanted to wait for new policies to appear. They wanted to act now in project meetings, in design critiques, in everyday choices. Still, there are clear barriers to putting this into action. Many felt that accountability often gets lost in shared ownership (when everyone is responsible, it can start to feel like no one really is). Others mentioned the usual suspects: deadlines, delivery pressure, or the sense that their influence stops at a certain level. What people asked for most wasn’t motivation, but structure, ways to track decisions, reflect as a team, and make accountability visible in the work itself.
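One concrete shape that structure can take is a shared decision log, loosely in the spirit of architecture decision records. Here is a minimal sketch; the fields are my assumptions about what a team might want to capture, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in a team's shared log of AI-related decisions.
    The fields are illustrative; adapt them to your own practice."""
    when: date
    decision: str          # what was decided
    who_benefits: str      # who this serves
    who_bears_cost: str    # who or what carries the trade-off
    owner: str             # the person accountable for revisiting it
    revisit_by: date       # accountability needs a date attached

decision_log: list[DecisionRecord] = []

# Hypothetical example entry.
decision_log.append(DecisionRecord(
    when=date(2025, 3, 12),
    decision="Use a third-party LLM for onboarding copy",
    who_benefits="new users (faster, friendlier onboarding)",
    who_bears_cost="energy use; copywriters shift to editing",
    owner="design lead",
    revisit_by=date(2025, 9, 12),
))

for record in decision_log:
    print(f"{record.when}: {record.decision} (owner: {record.owner})")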
Bias has a quiet power
Bias is at the heart of responsible design, yet it often hides in plain sight. Many people in the group admitted they notice bias most clearly when it’s personal, when a dataset excludes them or when a system misrepresents something they care about. What stood out was how bias hides quietly in defaults, assumptions, and language. And it slips through when deadlines take over or when teams all think the same way. Talking about bias can be uncomfortable, but it’s also where change starts. Recognising it doesn’t mean we failed, but that we’re paying attention.
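Paying attention can start with very small tools. As an illustrative sketch (the groups and the 10% floor are placeholders, not a recommended threshold), a team might check how a dataset represents different groups before trusting what gets built on top of it:

```python
from collections import Counter

def representation_report(labels: list[str], floor: float = 0.10) -> None:
    """Print each group's share of a dataset and flag groups that fall
    below an (arbitrary, illustrative) representation floor."""
    counts = Counter(labels)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented?" if share < floor else ""
        print(f"{group:>12}: {share:6.1%} ({n}){flag}")

# Toy data standing in for a column of your own dataset.
sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
representation_report(sample)
```

A check like this won’t catch every bias, but it turns a vague worry into a question the team can actually discuss.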
Community makes it real
Community turned out to be the glue holding all these values together. Collaboration was mentioned many times, with people sharing stories, comparing mistakes, and figuring things out together. At the end of the session, many participants made commitments to share knowledge through everyday conversations with teammates, project partners, and clients. That’s how responsible design spreads. One chat, one question, one act of care and generosity at a time.
Technology serves humanity
While transparency, sustainability, and accountability dominated, other values surfaced quietly too, such as empathy, learning, courage, and fairness. People spoke about workers’ dignity, justice in benefit-sharing, and the need for public oversight in how AI is used. These reflections carried real depth, highlighting the human side of responsibility that so easily gets overshadowed by metrics and progress. They touched on how we treat one another, how we share knowledge, and how we make room for care in systems built for speed. Put simply, they all pointed to the same truth: technology should serve humanity, not the other way around.
Conclusion: Responsibility gives us agency
AI reflects the people who build and use it, so if we want it to serve humanity, we need to act like humans: questioning, experimenting, and caring. The influence we each hold may feel small, but it compounds. Our choices, the language we use, the systems we shape, the energy we consume, all ripple outwards. And when we work collectively, those ripples start to look a lot like change.
Continue the conversation in our #greener-ai Slack channel! To join future Responsible AI events, subscribe to our calendar on Luma or Google.