Taming Silicon Valley - Gary Marcus
- Victor Hugo Germano
- Mar 16
- 5 min read
This text was originally written in Brazilian Portuguese.
A quick and very practical book on what to do at a moment when a new world order seems to be taking shape: BigTechs larger than most countries, acting as the new rulers of the world.
Gary Marcus is one of the world's leading and most knowledgeable critics of the technology industry: an AI expert and academic who seems to have remained unscathed by the financial allure of AI, with its short-term thinking and get-rich-quick speculation, rather than a long-term vision for developing AI that the public can at least minimally trust.

The author himself posted a few days ago about how the risks presented in the book have quickly become reality. The book's premises:
Companies are speculating wildly about LLMs and AGI, even though problems with hallucinations and lack of reasoning persist after years of study.
Governments have bought too deeply into this story; at the same time, they are heavily influenced by these companies and intend to follow their direction closely.
We will end up with little or no regulation of AI, and with the gigantic, well-documented risks of digital crime, misinformation, algorithmic bias, over-reliance on flawed systems, and so on.
Our only hope as citizens would be to protest or even boycott generative AI.
The Paris AI Summit, which took place at the beginning of February, clearly demonstrated the alignment of governments with the BigTech accelerationist agenda: pressure for reduced regulation, justified by national security or by the fear of impeding innovation. The discourse is always the same: “we will fall behind if we do not accelerate now”. It is the same kind of tactic tobacco companies used a few decades ago: lobbying governments and funding research and political campaigns to reduce public scrutiny.
Today, one industry is already using LLMs to the extreme, expanding its reach with great success: cybercrime. Whether through political disinformation, catfishing extortion networks, or deepfake scams, LLMs are good enough for malicious actors to exploit social vulnerabilities and make a lot of money. The cases are already countless, and they keep growing.
The author, who has been writing for years about how unreliable LLMs are, presents many reasons to be concerned about the use of artificial intelligence, and risks that need to be mitigated in some way. He lists numerous real risks of AI, many of which have already been discussed on this blog and by other authors. Among them:
Automated disinformation warfare
Market manipulation
Hallucinations that cause real problems (health, career, products)
Non-consensual deepfakes (pornography, smear campaigns)
Exploiting security vulnerabilities
Digital bias and discrimination
Privacy and information leakage
Intellectual property theft
Trust in untrustworthy systems
Environmental costs
Poorly finished products full of vulnerabilities: this is the reality of today's market. Under the existential accelerationist excuse (“world leadership is at stake”), we have become guinea pigs for an unreliable technology that can put people's lives at risk. In the pursuit of speculative money, it is the population that pays the price.
It is by now more than confirmed that LLMs depend on data whose intellectual property was never consented to. That is the polite, political way of saying: LLMs only exist because of stolen data. Artists and authors are being exploited through their works, which are used to train generative models without being properly licensed. This may be the first battlefield against BigTechs.
Even Sam Altman, testifying before the European Commission, said that there is no such thing as OpenAI and ChatGPT without the illegal use of intellectual property. It is now up to governments to hold these companies accountable for their actions. This is also why not even open-source models like DeepSeek R1 release the datasets used to train them. All the companies are guilty.
Transparency, Regulation and Supervision
The book addresses the issue of control and regulation of BigTechs as crucial to ensuring the development of safe and reliable AI.
As artificial intelligence continues to reshape our world, the question of how to effectively regulate the tech giants has become increasingly urgent. Are these companies simply too big to regulate? Parallels with other industries suggest not, but the unique challenges presented by AI and Big Tech require a differentiated approach.
Just as the tobacco industry was excluded from lobbying due to its conflict with public health interests, big tech companies often prioritize profit over social welfare. Unlike telecom companies, which act as neutral conduits of communication, social media and tech giants actively shape our digital experiences, as the sketch after this list illustrates:
Timeline algorithms optimize for engagement and polarization
Inflammatory content is amplified to drive ad revenue
Companies control user data and select which interactions we see
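To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-optimized feed ranker. This is not any platform's actual algorithm: the field names, weights, and `outrage_score` signal are invented for illustration. The point is structural: when the score rewards predicted clicks and shares and nothing rewards accuracy, inflammatory content rises to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model-estimated click probability (illustrative)
    predicted_shares: float  # model-estimated share probability (illustrative)
    outrage_score: float     # hypothetical measure of emotional charge, 0..1

def engagement_score(post: Post) -> float:
    # Invented weights: the ranker is paid in attention, not in accuracy.
    # Emotional charge correlates with sharing, so it gets a boost of its own.
    return 2.0 * post.predicted_shares + 1.0 * post.predicted_clicks + 0.5 * post.outrage_score

def rank_timeline(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement; truthfulness never enters the score.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_timeline([
    Post("Measured policy analysis", 0.10, 0.02, 0.1),
    Post("Outrageous conspiracy claim", 0.30, 0.25, 0.9),
])
print([p.text for p in feed])  # the inflammatory post ranks first
```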
Max Fisher's book The Chaos Machine describes numerous cases of how BigTech manages to manipulate our perception of the world, and how we are subject to its influence. The interests of big technology companies diverge significantly from the public good.
The Need for Comprehensive AI Governance
Gary Marcus and Amy Webb have advocated for the establishment of global AI governance structures. Marcus proposes an AI Agency modeled after the FCC and FDA, although implementing such initiatives faces significant challenges:
Nations are reluctant to cede sovereignty over internal decisions, even as they cede their autonomy to technology companies
AI companies have threatened to withdraw services if they are overly regulated (e.g. Sam Altman's comments on EU regulations)
Despite these obstacles, a coordinated global approach to AI governance remains crucial.
The Case for Regulation
Contrary to popular belief, well-designed regulations can benefit both society and businesses:
Regulatory clarity reduces chaos and facilitates long-term planning
Higher barriers to entry can protect established companies from competition (I personally believe this is why Sam Altman advocates regulation in the US)
Embracing regulation can improve public perception of AI companies
Current regulatory frameworks often fall short of what is needed. For example, self-driving cars operated by companies like Waymo cannot be issued traffic tickets when they violate the law: not when they hit a person, not when they block traffic. This highlights the need for updated legal frameworks.
In this sense, I believe Brazil is better prepared to deal with the situation. Although we still have a lot of progress to make, we already have a better understanding of the impact and responsibility of technology and platform companies.
To ensure that AI benefits society rather than just serving corporate interests, public engagement and activism are essential. By organizing and advocating for the responsible development of AI, we can work to avoid the pitfalls seen in the evolution of social media.
This book rounds out an interesting journey of AI literacy, and with it I feel well equipped to discuss the limits of this technology in even greater depth.
I recommend reading it.