What Role Will This Election Play In AI?

As we head to the polls today, there’s a lot on the line for the future of AI in the U.S. Both Kamala Harris and Donald Trump are pushing for America to lead in tech, but they’re taking pretty different routes to get there.

Harris is backing Biden’s Executive Order on AI, which focuses on building trust in AI through safety, transparency, and fairness regulations. Her goal seems to be protecting the public from AI risks while ensuring it serves society positively. On the other hand, Trump is pushing for fewer regulations to let companies innovate faster, hoping that less federal control will drive quicker AI advancement and market competition.

So, with these two different visions on the table, how do you think this election could affect the future of AI here in the U.S.? Are we heading toward more public trust and safe AI—or will the drive for innovation speed and competition take the front seat? And how should we, as a community, be thinking about these shifts and maybe even preparing for what’s next?

I’d argue that we need more regulations, especially because these foundational years of AI are so crucial. If we don’t put boundaries in place for the builders of these machine learning models and AI programs—given their enormous social and economic impact—then the perspectives of a few developers could end up influencing the thoughts and lives of billions.

I’m focusing on efforts by Meta, China & the likes of Tinybox to decentralise AI.

The world is moving to LLMs in other languages. OpenAI is opening an office in Singapore with that one goal in mind for the whole of Asia.

Dell, Intel, Google, and others have in recent months made Malaysia the No. 1 destination for their servers.

The US probably has too much baggage and too many archaic regulations (think of Spotify's rise out of Sweden)… your tech companies are already moving and finding cooperation in other parts of the world.

So to answer your question, it doesn't matter who wins your election. You should prepare by widening your world view and doing what your AI tech companies are doing now: preparing to move.

Thank you for sharing that perspective! I agree that the AI landscape is rapidly decentralizing, with companies like Meta, Tinybox, and others expanding into regions like Singapore and Malaysia to optimize growth and infrastructure. However, I’d argue that strong regulatory guidance, especially from the U.S., remains crucial during these foundational years of AI development.

As a world leader in technology, the U.S. has historically set standards that influence global practices, from innovation to ethical frameworks. This role is essential because, as the U.S. progresses in tech, so often does the rest of the world. Without balanced regulations—ones that protect and adapt to innovation—AI’s development may be steered primarily by competitive market interests, potentially overlooking critical aspects like public safety, ethical considerations, and equity in how AI benefits different communities.

While broadening our view and embracing international collaboration is vital, a regulatory framework here in the U.S. that upholds responsible and ethical AI can set a standard for the global tech community. To me, this isn't about stifling progress; it's about ensuring that AI remains an inclusive and positive force worldwide, especially given the profound impact it has on economies and societies alike.
