Developing the path of scientific enquiry. Is scientific enquiry a path? Examination of the boundaries of methodology, measurement, reason, revelation. Is this a new alchemy? This blog is a companion to the book
The Path of Scientific Enquiry


Racist AI

A bit of a buzz this morning - curtailed meditation. It started with Brian, Hanoi and artificial intelligence. AI will be part of the path of scientific enquiry as will (might) become clear.

Not true - it started with Safiya Noble and her talk "Algorithms of Oppression", in which she strongly indicated how search engines are racist and sexist. Now what follows is based on how I assume the search engine algorithm works - I don't know how it actually works, and there is a certain amount of secrecy around the algorithms because businesses want to be number one for the advertising.

Fundamentally search engine rankings are based on what is loosely called the market:- the most visited sites are highest up the rankings, assuming nothing too nefarious. Google is an advertising business, and high rankings mean advertising revenue. The rankings are not based on human values - eg creativity has no component within the Google model.
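To make the point concrete, here is a minimal sketch of a purely popularity-driven ranking. All the site names and visit counts are invented, and the real algorithms are secret and far more complex; the point is only that nothing in such a model measures creativity or any other human value.

```python
# Toy illustration of a popularity-based ranking model.
# Sites and visit counts are invented for the example.

def rank_by_popularity(pages):
    """Rank pages by visit count alone - no notion of creativity,
    accuracy, or any other human value enters the model."""
    return sorted(pages, key=lambda p: p["visits"], reverse=True)

pages = [
    {"url": "small-creative-site.example", "visits": 120},
    {"url": "big-retailer.example",        "visits": 980_000},
    {"url": "community-archive.example",   "visits": 4_300},
]

for p in rank_by_popularity(pages):
    print(p["url"], p["visits"])
```

Whatever the market visits most comes out on top; the most creative site, if rarely visited, comes out last.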

What are the implications of this marketing model? I contend that marketing and bell hooks' wonderful phrase "white supremacist patriarchal society" are symbiotically linked. As I am not a scientist, I am quite happy to say that marketing is white supremacist and patriarchal.

But I don't want to get too bogged down in language because bell's language turns the ignorant off. Saying that marketing is the 1%-system might not raise so many hackles; it is not a huge leap to see that the owners of big companies selling products are interested in marketing. And what are the characteristics of the 1%-system? Quite simply, the 1% exploit the 99% for profit. Not a biggy to say that. And it's not a big jump to then say that marketing exploits the 99%.

For me that does not need explaining but let's examine it anyway. I need to buy, so I have to learn what choices I have to buy from. Let's take food. I remember a TED talk, I think by Dean Ornish, in which he describes the supermarket shelf as having many brand names but little real difference in food choices. I know, as a person who would love to eat 100% organic, that I cannot go to a supermarket and do that. In other words in a supermarket I cannot choose to eat healthily; I cannot eat healthily, although my choices can improve my diet. Market apologists turn round and say that people are not choosing organic so it is not available. To counter that I argue that people are conditioned not to eat what is good for them, and we end in confrontation.

Marketing fashions what we know is available, and the results of search engines are based on the fashioning of the marketers. How can this be changed? Regulate search engines?

Search engines reflect the market, search engines reflect society. And that is why search engines are racist and sexist, as Safiya Noble says. The algorithms reflect the way humanity acts. I did not say they reflect the way humanity is, and I will get into that later - that is the nub of AI and the path of scientific enquiry.

AI has a similar problem, and here we come to Hanoi. In this clip (finish at 74m), he describes AI learning models as basically models that synthesise universal data collection. To learn about cats, AI trawls the net for all that there is to know about cats and then synthesises some kind of understanding of cats. Put simply, AI learns from all that there is to know about cats - good and bad. Sounds reasonable.
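The mechanism can be sketched in miniature. Assume (the corpus below is invented) a "learner" that does nothing but count associations in its training text - then whatever skew the text contains becomes the learner's "knowledge", the bad absorbed along with the good.

```python
from collections import Counter

# Toy illustration with an invented corpus: a model that "learns" by
# counting word associations reproduces whatever skew its training
# data contains - it synthesises the data, good and bad alike.

corpus = [
    "cats are cute", "cats are lazy", "cats are cute",
    "cats are evil",  # the bad is absorbed along with the good
]

associations = Counter()
for sentence in corpus:
    words = sentence.split()
    if words[:2] == ["cats", "are"]:
        associations[words[2]] += 1

print(associations.most_common())
```

Swap "cats" for a group of people and the corpus for the net in 1%-Trump-world, and the question below answers itself.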

So what about learning models concerning race? Based on universal data collection in 1%-Trump-world, what kind of racist is our AI machine? What kind of sexist?

So what happens to those humans who do not want to be racist or sexist? If we can understand the answer to this, can we then add it to our AI learning model? Racism is conditioning; if we assume we are all born equal then racism can only be conditioning. So as humans we unlearn our conditioning to stop being racist. This unlearning process is difficult, and has many stages of understanding:- language, removing false delusions and removing institutional biases (institutional racism). But there is still more. Around us racist conditioning continues, so we have to counter the continuing conditioning processes.

But ultimately we need detached minds that will prevent us from sinking back into the conditioning. And where do we get such a detached mind? One way is meditation, although for some such a detached mind could be natural.

Of course not all would agree that this anti-racist process is what all humans should be striving for.

This anti-racist model of unlearning could be written in stages as:-

Removing false delusions
Removing institutional racism
Avoiding reconditioning
Remaining detached

How can AI be conceived with these stages?

Racist language is easy to stop if we choose.

False delusions become harder. Some delusions are clearly false:- scientific data exists to refute the 19th-century racism that claimed black brains are smaller. Black people deserve equal opportunity, but some might question whether they get such equality. "Blacks are taking our jobs" becomes a little harder because of the term "our jobs". I would argue that the 1% are taking our jobs, and that there are enough jobs for all. So maybe the delusion is caused by institutional racism.

Avoiding reconditioning might be easy for AI:- if a pattern of conditioning has been recognised, it would be easy not to follow such conditioning again. But what is conditioning? And there we have a problem, because what I might describe as conditioning is not the same as how others might describe it.

And as for detachment, how can a robot be detached? What is human detachment? As a human I might be able to remain calm and detached, but how do I then describe what I am doing?

If detachment is achieved through meditation then that is impossible for a robot. Natural detachment is difficult to describe, and if it is difficult to describe how can it be "perceived" as AI?

For me the main issues with AI are political. I accept that we live in a 1%-system in which humans function as consumers for the express purpose of increasing 1%-profits. If that is accepted, what impact will robots have on consumerism? The second political question is the Oppenheimer question:- scientists might well define AI limitations, but will those limitations be what the 1% want, and will the 1% accept them?

But those political questions are not the main part of the Mandtao path of scientific enquiry, although financial and political awareness are always part of any path. For the Mandtao path the issue is:- what can AI not do?

In "Science set free" Sheldrake discusses his 10 core scientific assumptions/questions (here), and suggests that one science assumption is that it can explain everything given time. Extrapolate that, and we have can AI perform all that humans can given time? That enquiry is the path, part of the path of scientific enquiry.

"mind/brain frustration" <-- Previous Post "Bush Mechanics" Next Post -->