Russ Harvey Consulting - Computer and Internet Services

Artificial Intelligence

Assessing the risks

Possibilities | AI Platforms | AI Legislation | Bill C-27 Broken by AIDA | Learning More


AI is developing very rapidly. Much of the information on this page will be relatively fluid for a while.

The reality is, AI is everywhere.


AI helps diagnose our diseases, decide who gets mortgages, and power our TVs and toothbrushes. It can even judge our creditworthiness.


And the impacts — touching on issues of fairness, privacy, trust, safety, and transparency — will only get more profound as our reliance on AI increases with each passing day.
Mozilla Foundation

As with other technologies, there is a lot at stake, both for consumer privacy and for the corporations hoping to make billions from AI simply by being first (the next "Facebook" or "Google").

AI: Possibilities for Good

Artificial intelligence (AI) has been seen and promoted as having huge potential for good.

We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology.


With the convergence of chat interfaces and large language models, you can now ask for what you want in natural language, and the technology is smart enough to answer, create it, or take action.

Or Perhaps Not

AI also has the ability to work against humanity.

AI technologies can conjure any image, human faces that don't exist are trivial to make, and AI systems can converse naturally with humans in a way that could fool many people.


There's no doubt that algorithms, AI, and numerous other variations of autonomous software can have real and serious consequences for the lives of humans on this planet.


Whether it's people being radicalized on social media, algorithms creating echo chambers, or people who don't exist having an influence on real people, a total takeover could conceivably send human society down a different path of history.
How-To Geek

No Privacy Protections

AI is being rapidly deployed and few are ensuring that our privacy is being protected.

Microsoft, Meta (Facebook) and others have already rewritten their user agreements to provide themselves with the widest access to other people's material (copyrighted or not) while protecting themselves from any liability resulting from its misuse. These actions are being legally challenged.

Sensitive data is being captured by Generative AI, risking corporate secrets, customer privacy and data security. Clearly, people don't understand the risks.

In one survey, 76% of respondents had entered sensitive company information into a generative AI platform.

Concerns About AI Growing

Today, the view that artificial intelligence poses some kind of threat is no longer a minority position among those working on it.


There are different opinions on which risks we should be most worried about, but many prominent researchers, from Timnit Gebru to Geoffrey Hinton — both ex-Google computer scientists — share the basic view that the technology can be toxic.
The Guardian

Machines aren't burdened by a conscience the way humans are.

Not only has AI taken on the worst aspects of the Internet simply because it learns from data drawn from there, but many worry about the existential threat of AI becoming super intelligent and destroying humanity.

Researchers at one AI company recently investigated whether AI models can be trained to deceive users or to do things like inject an exploit into computer code that is otherwise secure.


Not only were the researchers successful in getting the bots to behave maliciously, but they also found that removing the malicious intent from them after the fact was exceptionally difficult.


At one point the researchers attempted adversarial training, which merely taught the bot to conceal its deception while it was being trained and evaluated, but to continue deceiving in production.

Some AI experts say we may have only years, or even months, before we are unable to control AI, while others argue that releasing AI onto the Internet has already removed all safeguards.

Biological Rather Than Computer Programming

AI expert Connor Leahy describes AI as more like a bacterium than a computer program:

You use these big supercomputers to take a bunch of data and grow a program. This program does not look like something written by humans; it's not code, it's not lines of instructions, it's more like a huge pile of billions and billions of numbers. If we can run all these numbers…they can do really amazing things, but no one knows why. So it's way more like dealing with a biological.


If you build systems, if you grow systems, if you grow bacteria who are designed to solve problems…what kind of things will you grow? By default, you're going to grow things that are good at solving problems, gaining power, at tricking people….
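Leahy's "grow a program" framing can be illustrated at toy scale. The sketch below is hypothetical and absurdly small, but it makes the point: instead of writing logic, we let random search tune a handful of numbers until they fit a target, and the finished "program" is just numbers, not readable instructions.

```python
import random

# Target behaviour we want the "grown" program to learn: y = 2x + 1.
random.seed(0)
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def loss(w):
    """How badly the three numbers (a, b, c) fit the data as a*x^2 + b*x + c."""
    a, b, c = w
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in data)

# "Grow" the program: start from zeros and keep random tweaks that improve fit.
weights = [0.0, 0.0, 0.0]
best = loss(weights)
for _ in range(20000):
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    trial = loss(candidate)
    if trial < best:
        weights, best = candidate, trial

# The result is a pile of tuned numbers (roughly [0, 2, 1]); nothing in them
# reads like human-written logic explaining *why* they work.
print(weights)
```

A real model is the same idea scaled up from three numbers to billions, which is why its behaviour can only be probed from the outside, not read off the source.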


Extreme Results

Mozilla noted YouTube video suggestions that reflect the extreme rather than the norm, leading many down a rabbit hole that can be destructive.

Mozilla and 37,380 YouTube users conducted a study to better understand harmful YouTube recommendations.


This is what we learned.

Other online resources have similar issues, with choices being made by machine-run algorithms rather than by people.


What is Real?

AI-manipulated images are creating a misinformation problem. A new skill set is required to detect whether an image has been altered or whether a person has been inserted into it to imply an association with its content.

Take the way many people relate to ChatGPT. Inside the chatbot is a "large language model", a mathematical system that is trained to predict the next string of characters, words, or sentences in a sequence.


What distinguishes ChatGPT is not only the complexity of the large language model that underlies it, but its eerily conversational voice. As Colin Fraser, a data scientist at Meta, has put it, the application is "designed to trick you, to make you think you're talking to someone who's not actually there".
The Guardian
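The "predict the next word" mechanism described above can be sketched at toy scale. This hypothetical bigram model is many orders of magnitude simpler than the large language model inside ChatGPT, but the core principle, counting which word tends to follow which, is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For each word, count which words follow it in the training text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or '?' if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "?"

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "sat"))  # "on"
```

A chatbot swaps the word counts for billions of learned parameters and predicts token by token, which is how a purely statistical process ends up sounding like a conversation partner.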

Artists have already found their works being used to generate new art too close to the originals to be truly original. In response, some have begun sabotaging the AI tools trained on their work.

AI-generated “Facts” Questionable

Google's AI produced some very disturbing interpretations of historical figures.

Google's highly-touted AI chatbot Gemini was blasted as "woke" after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and "diverse" versions of America's Founding Fathers.


Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
New York Post

Google's Gemini AI-generated pictures of a Southeast Asian woman and a black man wearing holy vestments.
Google AI's “image of a pope”

Google's Gemini AI-generated pictures of a Viking.
Google AI's “image of a Viking”


Clearly, diversity, equity and inclusion (DEI) has gone off the rails at Google. See: 'Absurdly woke': Google's AI chatbot spits out 'diverse' images of Founding Fathers, popes, Vikings.

If DEI is more important than facts, it doesn't bode well for the future of AI-generated search results (or trust in anything on the Internet).

AI Voice Clone Scams

AI is now capable of cloning human voices realistically.

Clone high-quality voices that are 99% accurate to their real human voices. No need for expensive equipment or complicated software.


VALL-E exhibits in-context learning capabilities and can synthesize high-quality personalized speech from only a 3-second recording of an unseen speaker used as an acoustic prompt.

These cloned voices are so realistic that even a close relative may be unable to discern the difference, especially under stressful circumstances. The recommended solution is to create a verbal password that can confirm you're actually speaking with that person.

The ability to create audio deepfakes of people's voices using machine learning and just minutes of them speaking has become relatively cheap, easy-to-acquire technology.


The first step is to agree with your family on a password you can all remember and use.


Then, when someone calls, emails or texts you (or someone who trusts you) with an urgent request for money (or iTunes gift cards), you simply ask them for the password. If they can't tell it to you, they might be a fake.
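The shared-secret idea has a digital analogue. As a hypothetical sketch (not a product recommendation): an app that verified a family passphrase should store only a random salt and a salted hash of the phrase, never the phrase itself, so even a compromised device doesn't leak the secret.

```python
import hashlib
import hmac
import os

def enroll(passphrase):
    """Store a random salt plus a salted hash; the phrase itself is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(candidate, salt, digest):
    """Constant-time check that the caller knows the passphrase."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll("purple elephant waffle")
print(verify("purple elephant waffle", salt, digest))  # True
print(verify("send iTunes gift cards", salt, digest))  # False
```

For a phone call, of course, the spoken version works the same way: the point is that only the real person can produce the pre-agreed secret on demand.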
LastPass revealed [April 10, 2024] that threat actors targeted one of its employees in a voice phishing attack, using deepfake audio to impersonate Karim Toubba, the company's Chief Executive Officer. [T]he LastPass employee didn't fall for it because the attacker used WhatsApp, which is a very uncommon business channel.


The use of audio deepfakes also allows threat actors to make it much harder to verify the caller's identity remotely, rendering attacks where they impersonate executives and company employees very hard to detect.


Europol warned in April 2022 that deepfakes may soon become a tool that cybercriminal groups routinely use in CEO fraud, evidence tampering, and non-consensual pornography creation.
Bleeping Computer

Websites Vulnerable

Many of today's websites are generated “on the fly” using content-management systems like WordPress. This makes them vulnerable to being taken over by AI-powered malicious actors.

The ability to render sites on the fly based on search can be used for legitimate or harmful activities. As AI and generative AI searches continue to mature, websites will grow more susceptible to being taken over by force.


Once this technology becomes widespread, organizations could lose control of the information on their websites, but a fake page's malicious content will look authentic thanks to AI's ability to write, build and render a page as fast as a search result can be delivered.
DigiCert 2024 Security Predictions

AI Manages Masses of Data Quickly

One of the most powerful advantages of AI is that it allows for the rapid manipulation of massive amounts of data.

Commercial and government entities have been collecting more data than they could possibly sift through.

The U.S. government had collected massive amounts of intelligence data, including information that could have stopped the 9/11 plot. However, the pertinent information was lost in the background noise.

AI would provide the ability to rapidly process massive amounts of collected data so that such failures would not recur.

Return to top

AI Platforms

Most of the major tech companies have some sort of AI in development. These are thumbnail sketches of the main contenders.


ChatGPT

One of the best-known AI services is ChatGPT.

GPT-4 is an artificial intelligence large language model system that can mimic human-like speech and reasoning. It does so by training on a vast library of existing human communication, from classic works of literature to large swaths of the internet.


GPT-4 is a large multimodal model that can mimic prose, art, video or audio produced by a human. GPT-4 is able to solve written problems or generate original text or images. GPT-4 is the fourth generation of OpenAI's foundation model.
Tech Republic

ChatGPT's developer, OpenAI, is focused on corporate users.

We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.

Microsoft Copilot

The Microsoft Copilot portal is available to anyone, but requires you to sign into your Microsoft account. Copilot is also found in Edge, Bing and Windows.

As of Nov. 15, 2023, Microsoft consolidated three versions of Microsoft Copilot (Microsoft Copilot in Windows, Bing Chat Enterprise and Microsoft 365 Copilot) into two, Microsoft Copilot and Copilot for Microsoft 365.


In January 2024 Microsoft added another option, Copilot Pro.
Tech Republic

Copilot for Microsoft 365

Copilot is being integrated into Microsoft 365.

Copilot is integrated into Microsoft 365 in two ways. It works alongside you, embedded in the Microsoft 365 apps you use every day — Word, Excel, PowerPoint, Outlook, Teams and more — to unleash creativity, unlock productivity and uplevel skills.


Today we're also announcing an entirely new experience: Business Chat. Business Chat works across the LLM, the Microsoft 365 apps, and your data — your calendar, emails, chats, documents, meetings and contacts — to do things you've never been able to do before.


You can give it natural language prompts like "Tell my team how we updated the product strategy," and it will generate a status update based on the morning's meetings, emails and chat threads.

Copilot Pro

Copilot Pro provides priority access during peak times, including within Microsoft 365 apps.

For individuals, creators, and power users looking to take their Copilot experience to the next level.


Microsoft has talked up AI laptops, hoping to capture that market. Right now, however, Copilot runs in the cloud, which makes the local hardware largely irrelevant; that disconnect may prove the Achilles heel of the push for AI-enabled hardware.

Microsoft talks a big game about AI laptops, saying they'll need NPU hardware that lets them accelerate AI tasks and a Copilot key on the keyboard for launching the AI assistant. But right now, those two things have nothing to do with each other.


Copilot can't use an NPU or other hardware you might find in an AI laptop at all. Whether you have an AI laptop with a cutting-edge NPU or not, Copilot works the same way. Copilot runs in the cloud on Microsoft's servers. Your PC's hardware isn't relevant.


The result of Copilot running in the cloud means it's slow, no matter what hardware you have. That may be fine if you're asking a complex question and are waiting for a detailed response, but you wouldn't want to use Copilot to change a setting in Windows. Who wants to sit around waiting for a response?

Google AI

Google DeepMind is the overall Google AI project.

DeepMind started in 2010, with an interdisciplinary approach to building general AI systems.


The research lab brought together new ideas and advances in machine learning, neuroscience, engineering, mathematics, simulation and computing infrastructure, along with new ways of organizing scientific endeavors.
Google DeepMind

Gemini is the public AI interface.

Gemini gives you direct access to Google AI. Get help with writing, planning, learning, and more. The Gemini ecosystem represents Google's most capable AI.


Our Gemini models are built from the ground up for multimodality — reasoning seamlessly across text, images, audio, video, and code.

IBM watsonx AI

IBM's watsonx AI aims to provide AI services to business.

IBM watsonx AI and data platform includes three core components and a set of AI assistants designed to help you scale and accelerate the impact of AI with trusted data across your business.


AI Legislation

The imbalance between the mega-corporations developing and using AI and the average person is massive.

There are already significant privacy issues, notably the widespread collection of personal data, never mind the fact that AI is not truly understood even by its developers.

The only way to restore balance is legislation that puts privacy first.

Bill C-27 Broken

Unfortunately, Canada chose to modify Bill C-27, the Consumer Privacy Protection Act, by adding the Artificial Intelligence and Data Act (AIDA): 101 pages of significant, industry-friendly, last-minute changes revealed AFTER the public consultations were completed.

Get loud: Email your MP to fix C-27!

AI regulation is not a simple process. Hurrying it through in the manner the Canadian government did is a red flag. Here's what OpenMedia says:

The government is currently debating Bill C-27 — a privacy reform bill that's somehow ALSO Canada's first AI regulatory bill — and might be our only AI regulation for YEARS!


Why the rush? Industry wants free rein to experiment with AI on us, right NOW. They're pressuring the government to pass a half-finished bill — NOT to take their time to hear from ALL Canadians and thoroughly protect our rights.


Regulating AI RIGHT is more important than rushing this bill through.


Why do we have two monumental pieces of legislation baked into one bill? Good question, one without a clear answer from our government.


The sneaky, secret reason? Since ChatGPT, Dall-E and all the other 'generative' AI techs started rolling out, AI industry stakeholders in Canada are demanding a loose bill with a light touch. The goal? Not so much regulating AI well; instead, they want plenty of legally permitted room to experiment on Canadians, our data, and our rights.

What's wrong with the AI rules in C-27?

The AI rules in C-27 simply aren't doing the job. We need vague definitions clarified and loopholes closed if they're ACTUALLY going to protect us from AI surveillance and manipulation in the years ahead.


Ideally, our legislators would pause and give these rules a thorough public hearing BEFORE passing them into law, with comprehensive protections. At minimum, they need to do their best to clean the AI rules in C-27 up before it passes, make sure they're as strong and specific as possible, and that they can be rapidly improved by an independent regulator as we learn more about where they work — and where they don't.


AI is going to keep developing for the good and for the bad. Our laws can help nudge developers towards socially beneficial, user-centered AI — AI that serves us, respects our choices, and makes our lives better.


But unless our laws are seriously updated with ironclad, unbreakable protections in place — a LOT could go wrong. Flimsy legislation will not protect us against the potential harms it can have. Either the government goes big or goes home.

Sign the Petition

Email your MP and tell them to give AI regulation the full study it deserves!


Learning More

These resources are recommended if you wish to further understand this issue.




Updated: April 11, 2024