Safeguarding AI Technology

AI technology has taken the world by storm, bringing with it misconceptions about its impact on jobs and concerns about ethical implementation. In this interview, Mark Basa, an advocate of Bittensor, an open-source AI system, explains why open-source AI is the way to promote transparency, remove biases, and accelerate innovation. 

What is the biggest misconception the public has about AI and its impact on jobs?

The AI narrative tends to focus too heavily on the idea of AI taking jobs and on “killer AI”. In reality, while AI will disrupt industries and change the world, it will also create many new opportunities. Open-source AI protocols such as Bittensor enable developers to build their own AI-powered applications and sell them directly to users. This will lead to increased competition and innovation in a free market where Bittensor has the chance to compete with monolithic companies like Google and Microsoft, and possibly to beat them at incentivising the people who develop AI. Bittensor is designed so that an engineer at a big tech company can contribute machine learning (ML) models to subnets and receive a reward determined by the quality of each model. This simple mechanism allows an application built on Bittensor to incentivise an engineer to contribute to ethical, open-source AI while matching or even exceeding the salary the same engineer would receive at Google. 
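
To make the quality-based reward idea concrete, here is a minimal illustrative sketch of how a reward pool could be split among subnet contributors in proportion to scored model quality. The function, contributor names, and scores are hypothetical; this is not Bittensor's actual emission or validation logic.

```python
# Illustrative sketch only: a hypothetical quality-weighted reward split,
# not Bittensor's actual scoring or emission mechanism.

def split_rewards(quality_scores: dict[str, float], reward_pool: float) -> dict[str, float]:
    """Distribute a reward pool to contributors in proportion to their model quality scores."""
    total = sum(quality_scores.values())
    if total == 0:
        return {contributor: 0.0 for contributor in quality_scores}
    return {
        contributor: reward_pool * score / total
        for contributor, score in quality_scores.items()
    }

# Example: three engineers submit models to a subnet and validators score them.
scores = {"engineer_a": 0.92, "engineer_b": 0.75, "engineer_c": 0.40}
print(split_rewards(scores, reward_pool=100.0))
# Rewards are proportional to quality: roughly 44.4, 36.2, and 19.3.
```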

What are the benefits of an open-source approach to AI?

Centralised AI controlled by a few big tech companies raises concerns about responsible and ethical implementation. Open-source protocols like Bittensor democratise AI by empowering independent engineers worldwide to build on top of the network. This decentralised approach promotes transparency, mitigates risk, and accelerates innovation through open collaboration. It also means that engineers from all over the world, with different backgrounds, get to shape how AI is developed and how it provides information. That contrasts with centralised AI, which is typically proprietary and closed-source, driven by profit-oriented agendas, regulated by government bodies that do not understand the technology, and ultimately fuelled by one outcome: the share price. How can the public expect truly ethical AI when the models that are supposed to make our lives better are not built by the people they serve, but built with profit as the end goal?

How will open-source AI change the way people use AI services in their daily lives?

Open-source AI will make powerful AI capabilities vastly more accessible and customisable for consumers. Instead of AI services being controlled by a few dominant players, we’ll see an explosion of diverse AI apps tailored to niche needs, powered by protocols like Bittensor. People will benefit from AI-powered tools for everything from managing schedules and errands to getting personalised recommendations and insights. The key advantage of open-source protocols like Bittensor is that they offer users greater choice and control over their AI experiences. Projects like Taoshi and Corcel are reshaping industries by democratising how AI technology is developed and used. These platforms empower individuals to create and contribute to a network that rewards innovation and participation. As a result, the world’s best engineers at major tech companies are incentivised to leave their positions and build independent applications. This shift fosters a competitive environment in which smaller applications can challenge established giants like Google. The core disruption is the transition to a model where engineers directly serve, and are compensated by, a user-driven network rather than a centralised corporate structure. This not only accelerates competition but also promotes a more equitable and open technological ecosystem.

Why do journalists often fail to accurately report on AI and blockchain’s ethical, safety, and risk aspects, and why should experts lead such discussions?

The rapid pace of technological change in AI presents a challenge for journalists. Many lack deep domain expertise in these complex fields, and as a result mainstream coverage often mischaracterises or oversimplifies the societal implications of these technologies. To improve public understanding, we need more specialised journalists who can responsibly translate these advances for a general audience. There is an enormous disconnect between new technologies, how the media report them, and how the public understands them. New technologies develop so quickly that it is impossible to remain an expert for long, because new competitors keep emerging. The media end up scrambling for a scoop rather than reporting from where the action is actually happening: smaller, more groundbreaking AI start-ups that have far less brand equity and are therefore perceived as not being experts in their field. Contrary to popular opinion, Google and Meta are no longer agile enough to be truly innovative companies. They run their gigantic core businesses and acquire smaller companies with growing market share, because it is in their interest (margin-wise) to own those companies rather than enter those markets, innovate, and compete. This leaves the media, who often do not hire journalists with AI expertise, interviewing the big tech brands they know rather than anyone doing work that could be truly groundbreaking and innovative. VCs have poured billions into smaller, lesser-known protocols, yet the founders of these protocols are unfortunately often overlooked when AI expertise is needed for a quote in a major story or when the innovative technology they are building deserves coverage.

How can we ensure that underrepresented communities are adequately represented in AI oversight processes, and what benefits might this bring to AI development?

There is no barrier to entry to begin using open-source AI. Protocols like Bittensor enable anyone, anywhere, to develop ML models, run validators, and mine, with end-to-end support and encouragement from the community. Open-source protocols carry an explicit mandate for transparency and public input into how the protocol is designed, managed, and scaled, and feedback on the most pressing issues is taken seriously. This holds AI projects built on the system accountable: scams, biases, and projects that mislead users are detected and exposed quickly, because the entire community is actively working to grow the system and eliminate anything that holds it back. For example, if bias is introduced into a model or miners try to exploit the system, community-driven coordination casts out the bad actors. 
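
As a rough illustration of that community-driven coordination, the sketch below shows a simple quorum check in which miners flagged by a majority of validators are excluded. The validator and miner names are hypothetical, and this is not Bittensor's actual consensus or slashing mechanism.

```python
# Illustrative sketch only: a hypothetical majority-vote check for excluding
# bad actors, not Bittensor's actual consensus mechanism.
from collections import defaultdict

def flag_bad_actors(validator_flags: dict[str, set[str]], quorum: float = 0.5) -> set[str]:
    """Return miners flagged by more than `quorum` of validators."""
    counts: dict[str, int] = defaultdict(int)
    for flagged_miners in validator_flags.values():
        for miner in flagged_miners:
            counts[miner] += 1
    threshold = quorum * len(validator_flags)
    return {miner for miner, votes in counts.items() if votes > threshold}

# Example: three validators independently flag suspicious miners.
flags = {
    "validator_1": {"miner_x"},
    "validator_2": {"miner_x", "miner_y"},
    "validator_3": {"miner_x"},
}
print(flag_bad_actors(flags))  # {'miner_x'} is excluded; 'miner_y' lacks a majority.
```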
 
Could you explain how an open ownership model for AI could contribute to safeguarding the technology for public good, rather than private gain? 
 
Open ownership of AI systems like Bittensor could help ensure that the technology is developed and used in an ethical manner that benefits humanity as a whole, rather than being controlled by a single company or group with their own interests in mind. By distributing ownership and control across many different parties, open ownership makes it harder for any one entity to monopolise the technology or use it to exploit or harm people. Big Tech will lobby Congress to regulate AI, and the solution will of course be that the only safe and trusted AI will be the AI they own, control, and profit from. With open-source protocols, more transparency and accountability are possible thanks to the public contributing to how the AI is developed and used. At the same time, open ownership may come with challenges around coordination, governance, and ensuring quality control. But overall, I believe the open ownership model has the potential to make AI a more positive force for the world.

Executive Profile 

Mark Basa

Mark Basa co-founded a Bitcoin payment gateway between 2011 and 2013 that was supported by Microsoft and Australian Government incubator programmes. He co-founded a web3 game studio backed by Japan’s largest VC and has served as brand director of a layer-one blockchain. He has appeared on television and discussed blockchain in coverage from outlets such as Cointelegraph, the Daily Telegraph, The Guardian, Reuters, Yahoo Finance, and Express.co.uk.