Now Bard is Going to Read Your Email. Wonderful.

Written by Geoff Halstead

Reading time: 3 min.

This week, Bard — Google’s competitor to ChatGPT — got an upgrade.

One new feature, Bard Extensions – deemed “interesting” by many reviewers – allows the artificial intelligence chatbot to connect to a user’s Gmail, Google Docs and Google Drive accounts. And not just to access them: to review, organize and even write your emails for you!
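
To make the scale of that grant concrete, here is a minimal sketch of our own (not Google’s actual Bard Extensions implementation, which is a first-party integration) showing the real read-only OAuth scopes a hypothetical third-party assistant would have to request to get comparable access to Gmail, Docs and Drive through Google’s published client libraries. The credentials file name is a placeholder.

    # Hypothetical illustration only: Bard Extensions is a first-party Google
    # integration, so this is not how it works internally. The point is the
    # breadth of access a single consent grant confers.
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    SCOPES = [
        "https://www.googleapis.com/auth/gmail.readonly",      # every message in the inbox
        "https://www.googleapis.com/auth/drive.readonly",      # every file in Drive
        "https://www.googleapis.com/auth/documents.readonly",  # the text of every Doc
    ]

    # "credentials.json" is a placeholder for an OAuth client secret file.
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)  # one consent screen grants all three scopes

    # With that single grant, the app can page through the user's mail.
    gmail = build("gmail", "v1", credentials=creds)
    inbox = gmail.users().messages().list(userId="me", maxResults=10).execute()
    print(inbox.get("resultSizeEstimate"))

The user sees one consent dialog; everything after that happens wherever the assistant runs, which is precisely the concentration of access the rest of this post is worried about.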

We don’t mean to beat up on Kevin Roose at The New York Times – he’s a great technology writer. But we do want to point out how astonishing certain assumptions about what we should do with AI have become. To take just a couple of examples, Kevin tested tasks such as having Bard read all of his email and psychoanalyze him, or read his inbox and automatically draft replies on his behalf. He was disappointed by how poorly Bard performed:

“The dream of an all-knowing A.I. assistant, capable of perfectly analyzing our pasts and anticipating our needs may be a ways off.”

Kevin Roose, New York Times

No doubt inspired by Tony Stark’s useful and lovable J.A.R.V.I.S. in the Iron Man franchise, it certainly sounds wonderful. When all of the technology and infrastructure required to power a J.A.R.V.I.S. is under your personal control – as it is for Tony Stark – that’s one thing. When it is all controlled by corporations, and thus accessible to criminal hackers, adversarial governments and all manner of other bad actors, that is something else entirely. And not remotely good.

Back in the here and now, it is clear that we are not ready – we have not even started getting ready – to address the massive privacy issues, and the risk of breathtakingly destructive actions, that can come from such use of AI today. Yet we have clearly reached the point where the wisdom of such a thing is not even questioned.

From our point of view, if you went back even a short stretch of time, nobody would have agreed to this!

The advances in AI required to achieve that dream are, sadly, far easier than the advances required in an end user’s ability to control this data and what Google does with it – and to control J.A.R.V.I.S., for that matter! The former is just technology, which will evolve and advance at a rapid pace. The latter requires changes in fundamental attitudes and behaviors that have become ingrained in us over the last few decades.

As Roose notes:

Google is well positioned to close that gap. It already has billions of people’s email inboxes, search histories, years’ worth of their photos and videos, and detailed information about their online activity. Many people — including me — have most of their digital lives on Google’s apps and could benefit from A.I. tools that allow them to use that data more easily.

Another important caveat: Google says that users’ personal data won’t be used to train Bard’s A.I. model, or shown to the employees reviewing Bard’s responses. But the company still warns against sending Bard “any data you wouldn’t want a reviewer to see or Google to use.”

Kevin Roose, New York Times

It’s important to say that we don’t intend to pick on Google here either. All of the leaders in the AI industry have been proactive in both articulating concerns and inviting scrutiny and regulation. But the fact is that if we give AI the power to suck in all of our data, impersonate us and start taking actions in the digital world on our behalf, that ability can and will be hacked and hijacked.

Why? Because today’s AI is completely and utterly centralized. And we know that anything in the Cloud that is under centralized control can and will be successfully attacked. There is enormous and indisputable evidence for this, and none for the presumption that such systems are defensible.

Of all the things that we need to do to regulate and control AI, this area probably requires the closest immediate inspection. It’s not hard to see how this falls quickly into the category of existential risk – first for unfortunate individual humans, then for thousands, and ultimately for the entire species. Humans are notoriously bad at assessing risk, and the most powerful humans are now constantly looking for the easiest way to extend their own power and actions without the costs, inefficiencies and pesky inconvenience of other humans being involved.

Just think that one through for a bit, and perhaps you will agree it would be a good idea to slow down! Let’s hope others do, and soon.
