GDPR - some AI protection
In Europe a law, the GDPR (General Data Protection Regulation), has been passed; here is what Fortune says the crux of the law is:-
When they collect personal data, companies have to say what it will be used for, and not use it for anything else.
Companies are supposed to minimize the amount of data they collect and keep, limiting it to what is strictly necessary for those purposes - they're supposed to put limits on how long they hold that data, too.
Companies have to be able to tell people what data they hold on them, and what's being done with it.
Companies should be able to alter or get rid of people's personal data if requested.
If personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process.
This is another article in which Forbes is warning business about GDPR. Here is a PR video about the strategy of the EDPS, the people behind GDPR.
This GDPR is EU stuff and they are focussing on privacy; the reaction of Forbes makes me think that they are attacking US BigData - but that is a minimally-informed opinion. Here, Tim Cook of Apple calls for something similar in the US. Here is a Diem25 discussion of GDPR, and here is what I discussed about detoxing your internet. Diem25's view of Cambridge Analytica is that they followed BigData's norms, the norms of the Data Industrial Complex, and were a sacrificial lamb rather than a rogue exception. This is a future policy discussion for EDPS - the internet as "free infrastructure".
What we have to understand as ordinary users is that our function within the 1%-satrapy is as consumers; we earn to consume. It would therefore fit that they would want to control our consuming. BigData's microtargeting is not a fringe activity (rogue Cambridge Analytica activity) but central to consumer control - look at this detailed description of a particular ad fraud, for example. But if we are discerning and not conditioned then we can be private. This understanding of discernment as privacy is part of a wider umbrella associated with pathtivism development; such development comes from looking at ourselves and at how we accept the conditioning that happens. With regards to the internet there isn't sufficient discernment.
However my main concern is not with privacy but with weaponised AI, and a GFP - Gaia First Protocol. If the EU can accept the above 5 principles of GDPR, then why can't we develop 5 GDPR-style principles applied to weaponised AI?
Companies are leading the way in weaponised AI research, so even though we live in a 1%-satrapy we must try to use whatever government offices we can to limit the way the 1% will use AI in war.
Companies will have to specify the limits of what the weaponised AI will be used for. These limits will depend on a Gaia First Protocol, noting acceptable environmental damage as well as damage to human life.
Weaponised AI will be data-driven, so once these limits have been reached it must be a requirement that the weaponised AI be terminated and no further data gathered.
Companies have to be transparent as to the limits of their weaponised AI.
If weaponised AI is making automated decisions about use of weaponry, companies must be able to explain the logic behind the decision-making process.
These principles must be hard-wired into the weaponised AI to ensure that they are not susceptible to bad actors, whether individuals or governments.
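To make the "hard-wired" idea concrete, here is a minimal sketch of what principles 2 and 3 above might look like in software: limits that are fixed at deployment and a termination check that, once triggered, permanently refuses to gather further data. All names here (GaiaLimits, MonitoredSystem, and so on) are hypothetical illustrations of the protocol described above, not any real system's API.

```python
# Hypothetical sketch of a hard-wired Gaia First Protocol limit check.
# All class and field names are illustrative assumptions, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: limits cannot be altered after creation
class GaiaLimits:
    max_environmental_damage: float  # acceptable environmental damage (arbitrary units)
    max_human_harm: float            # acceptable damage to human life (arbitrary units)


class TerminatedError(Exception):
    """Raised once a limit is breached: no further data may be gathered."""


class MonitoredSystem:
    def __init__(self, limits: GaiaLimits):
        self._limits = limits
        self._terminated = False
        self._env_damage = 0.0
        self._human_harm = 0.0

    @property
    def terminated(self) -> bool:
        return self._terminated

    def record(self, env_damage: float, human_harm: float) -> None:
        """Record observed damage; terminate permanently when any limit is reached."""
        if self._terminated:
            # Principle 3: after termination, no further data is gathered.
            raise TerminatedError("system terminated: no further data may be gathered")
        self._env_damage += env_damage
        self._human_harm += human_harm
        if (self._env_damage >= self._limits.max_environmental_damage
                or self._human_harm >= self._limits.max_human_harm):
            self._terminated = True
            raise TerminatedError("limit reached: system terminated")
```

The design choice here is that the check lives inside the system itself and the terminated state is one-way, which is the sense in which the principles are "hard-wired" rather than left to an operator's discretion.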