
Wouldn’t it be nice if we didn’t have to think about this? Imagine going to work knowing we’ll make money, and help a company make money — and trusting government regulation and taxation will keep our work on the right side of history.
Wouldn’t it be nice for companies to know they don’t have to “out-ethics” their competitors? That they can focus on doing what businesses do well — creating goods, services, jobs, and money — within the bounds of regulation binding not only them, but also the competition?
This post isn’t directly concerned with tech’s impact outside the US. But it’s concerned with some of the largest tech (and non-tech) companies impacting life around the globe, so it feels relevant.
And besides: it’s timely, and I have ~thoughts~ from my own years of experience 😉.
OpenAI recently announced a new internal oversight board, tasked with giving “recommendations on critical safety and security decisions for all OpenAI projects.” If this is the first you’ve heard of a big, moneyed tech company installing some flavor of “ethics board” to ensure it doesn’t run roughshod over societal norms of fair labor, legal standards, not-killing people, and so on, then you might be forgiven for believing OpenAI’s board might actually accomplish this.
But if you’ve paid any attention to the goings-on in Big Tech over the past decade (and not just Big Tech, but Big Business writ large), I hope your eyes rolled as hard against the backs of their sockets as mine did when you heard OpenAI’s announcement.
Astute readers steeped in Silicon Valley “culture” may complain here that OpenAI wasn’t actually, technically, founded as a for-profit company. And therefore that their internal oversight board will really(!), truly, pinky promise, be different.
For one, I doubt it. Everything about OpenAI besides its fuzzy nonprofit status suggests it’s the Current Hot Thing in Silicon Valley, and nothing stays Hot and Current and Disruptive there without heaps and heaps of money getting thrown at it.
And for two, it doesn’t really matter. Maybe, maybe, OpenAI’s board will be the first in a long while to actually wield real power, to hold the ability to turn a great corporate ship — even, if needed, away from the guiding lights of Growth and Profit.
But even if it does, it would be the first, and last, in a long while. In 2019, Google dissolved an AI ethics board only a week after starting it, and in 2020 blocked publication of an internal research paper highlighting the risks of large language models. The company this January split up yet another internal AI ethics team, following Meta’s lead last fall and Twitter’s the year before. And numerous big tech companies cut DEI programs last year after 2020’s myriad promises.
The track record, to say the least, isn’t good.

Years ago, I attended a conference in Berlin discussing technology, society, and ethics. A young German woman in a workshop there — a philosophy master’s student, I believe — proposed that big tech companies should hire… philosophers. These employees could, she argued, look at the external harms products might create, and steer them in more ethical directions.
I thought her idea was great — in theory.
But I’d seen it fail in practice so often I couldn’t keep from objecting. I explained that I worked at Google, as a user researcher on a number of “social good” projects — perhaps the closest role I can imagine to the philosopher one she proposed. My primary job was to nudge teams toward developing products in more “user-centric” directions. And while the recommendations weren’t always in service of greater long-run good for society, they were often directionally right.
Often these recommendations were adopted. But sometimes they were not — especially when they clashed with “business interests,” those twin drivers of growth and profit. It’s one thing to have philosophers in a company, and quite another to empower them to override the incentives driving nearly every large corporation — and nearly every team, performance metric, and promotion decision within it. Incentives that, at least for large, public companies like Google and Meta, are in fact legally mandated.
And the German student seemed to understand. She quickly saw how real social change won’t come from within big American companies, because businesses are always gonna business.
So why can’t we in America understand the same? Why, especially, can’t our media, and government?
It’s not that I believe large corporations are incapable of doing any good for society, nor that they should be given free rein to do whatever they’d like.
It’s that businesses don’t exist for the sake of social good: they exist to create jobs, and to turn a profit. Especially, again, if the company is publicly held.
With the exception of a few relatively small B Corp / Public Benefit Corporations like REI, Patagonia, and Ben & Jerry’s, social good should be the work of nonprofits and the government. And if those aren’t doing a good job of it, then we should do something about it.
As Scott Galloway recently said,
“All of this corporate social responsibility… has done nothing but distract the populace from the need to regulate these companies. And all of this notion that we keep hoping that capitalism can serve as an engine of change and for social good — we’ve had forty years to do that….
The majority of companies will do what they need to make more money, full stop. Ford would still be pouring mercury into the river if we allowed them to do that. The trust and safety team of OpenAI should be in Washington, DC and have the ability to tax, prosecute, and imprison.”
It seems we’ve somehow forgotten this. (Though Lina Khan and the Biden administration do seem determined to make up for lost time 💪.)
In a way, those of us in the corporate world should see this as freeing. During my last few years at Google, friends and I would sometimes fret, wondering if we were just part of the problem.
And I remember my grad school roommate years ago arguing with her boyfriend about whether or not it was ethical to join an oil company. They were both earning geology PhDs, and oil exploration jobs are much more lucrative than academic ones. My roommate thought it wrong to help a company extract more oil from the earth, but her boyfriend thought a right-minded individual in an otherwise corrupt company might help steer it in a better direction.
He wanted to be the oil company’s philosopher. Just as the German student proposed.

Wouldn’t it be nice, though, if we didn’t have to think about this? Imagine going to work knowing we’ll make money, and help a company make money — and trusting government regulation and taxation will keep our work on the right side of history.
Wouldn’t it be nice for companies to know they don’t have to “out-ethics” their competitors? That they can focus on doing what businesses do well — creating goods, services, jobs, and money — within the bounds of regulation binding not only them, but also the competition?
It’s for this very reason we have strict guidelines in football and hockey, basketball and boxing: certain hits are okay, while others are not. Within those bounds, go nuts.
How might American football be played without referees? Most fans would of course prefer the teams avoid extreme violence. They’d likely police themselves, and set and follow some kind of voluntary ethical guidelines.
But at the end of the day, fans want their teams to win. And if that requires they break through those self-enforced, nice-to-have rules once in a while?
Well, then.
Song of the Week: Billie Eilish — TV (Maybe I’m the Problem)