Google Allegedly Pushed Scientists to Cast Artificial Intelligence in Positive Light

Makes you wonder, doesn’t it?

In what could be a major sign of the times that we’re living in, scientists at Google are reportedly being instructed to skew their views of artificial intelligence in order to benefit the technology giant.

Google is very much in the business of innovation, particularly as it pertains to algorithmic mapping and how best to apply the practice to the human mind.  The internet Goliath would like nothing more than to predict just what you’re going to be thinking about, so that it can better serve the advertisers who rely on it, thus creating a larger online footprint for Google.

What better way to do this than with artificial intelligence?  The only problem is that we have a healthy mistrust of this sort of technology thanks to the very nature of the concept.

It appears as though Google is looking to assuage our fears, but for what purpose?

Alphabet Inc’s Google this year moved to tighten control over its scientists’ papers by launching a “sensitive topics” review, and in at least three cases requested authors refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal webpages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff stated. Reuters could not determine the date of the post, though three current employees said the policy began in June.

But there’s more…

Tensions between Google and some of its staff broke into view this month after the abrupt exit of scientist Timnit Gebru, who co-led a 12-person team with researcher Margaret Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming AI that mimics speech could disadvantage marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Google Senior Vice President Jeff Dean said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts underway to address them.

If Google is manipulating the science regarding artificial intelligence, one can only imagine that such a move would benefit its future endeavors.  And, in a field as gargantuan as AI, there is every reason to scrutinize the potential for malfeasance by Google.

