
Counter-Disinformation: The New Snake Oil (AI)


Geee


Racket

Now a journalist, Naval Special Warfare vet Tom Wyatt examines the life he could have led as an anti-disinformation contractor

 

:snip:

 

Among defense heavyweights Lockheed Martin and General Dynamics was a lesser-known company down in the silver category named Primer. Primer is one of the many artificial intelligence and machine learning-focused companies orbiting the defense industry. Aside from AT&T, the silver sponsors blended together in an indistinguishable list of obscure defense contractors, and perhaps Primer would’ve remained obscure, too, had the company not acquired Yonder, an Austin, Texas-based “information integrity” company.

Primer had already entered the disinformation space in 2020, when it won a Small Business Innovation Research, or SBIR, contract with the Air Force and Special Operations Command, or SOCOM, to develop the first machine learning platform to automatically identify and assess suspected disinformation. This evolution into the disinformation world was fully realized with its 2022 acquisition of Yonder, an “information integrity” company focused on detecting and disrupting disinformation campaigns online.

Yonder, originally New Knowledge, rose to prominence when it co-authored a report to the Senate Intelligence Committee on Russian influence campaigns leading up to the 2016 presidential election. Ironically, New Knowledge’s own foray into election meddling would make it a household name. During the 2017 Alabama Senate race, New Knowledge’s CEO, Jonathon Morgan, created a fake Facebook page and Twitter “botnet” with the intent of swaying votes toward the Democratic candidate.

“We orchestrated an elaborate ‘false flag’ operation that planted the idea that the Moore campaign was amplified on social media by a Russian botnet,” said an internal document from Morgan’s project. :snip:


Medical AI's weaponization

Machine learning can bring us cancer diagnoses with greater speed and precision than any individual doctor — but it could also bring us another pandemic at the hands of a relatively low-skilled programmer.

Why it matters: The health field is generating some of the most exciting artificial intelligence innovation, but AI can also weaponize modern medicine against the same people it sets out to cure.

Driving the news: The World Health Organization is warning about the risks of bias, misinformation and privacy breaches in the deployment of large language models in healthcare.

The big picture: As this technology races ahead, everyone — companies, government and consumers — has to be clear-eyed that it can both save lives and cost lives.

What’s happening: AI in health is delivering speed, accuracy and cost dividends — from quicker vaccines to helping doctors outsmart killer heart conditions.:snip:


Generative AI tools like ChatGPT could test bounds of tech liability shield

Generative artificial intelligence (AI) tools are testing a provision that protected the tech industry for decades from lawsuits over third-party content.

As applications like ChatGPT and rival products rise in popularity, experts and stakeholders are split on whether and how Section 230 of the Communications Decency Act — a liability shield for internet companies over third-party content — should apply to the new tools. 

Ashley Johnson, a senior policy analyst at the Information Technology and Innovation Foundation, said the “most likely scenario” is that if generative AI is challenged in court, it “probably won’t be considered covered by Section 230.” 

“It would be very difficult to argue it is content that the platform, or service, or whoever is being sued in this case had no hand in creating if it was their AI platform that generated the content,” Johnson said. 

Even Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified during a Senate hearing last week that Section 230 is not the “right framework” for tools like the ones his company has put out. 

“We’re claiming we need to work together to find a totally new approach,” Altman said. :snip:


The Pentagon explosion that wasn’t shows perils of polluted information ecosystem

 

REAL FAKE NEWS: It was a beautiful spring Monday in Washington when officials and reporters in the Pentagon began to be barraged with questions about a reported explosion outside the building, which, according to a photo that quickly went viral, was sending a massive plume of black smoke into the air. It was quickly revealed to be a hoax.

“There's no explosion or fire at or near the Pentagon-apparently these are false reports circulating on social media,” tweeted Voice of America correspondent Carla Babb. “People inside the building have absolutely no idea about any explosion.”

:snip:

 

THE MEANINGLESS BLUE CHECK: The fake image, which many posited was created by artificial intelligence, hit the internet just after 9:30 a.m., when the stock market opened, triggering computerized algorithmic trading, which tracks news events, and causing a brief dip in the S&P 500 before the fake photo was debunked.

Many observers speculated that the hoax was an attempt to manipulate the market just enough to make a profit on the small 0.3% ripple.

:snip:
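For anyone wondering how a single fake photo can move markets within minutes, here is a deliberately crude sketch of the kind of headline-keyword trigger the article alludes to. Everything in it is hypothetical and far simpler than any real news-tracking trading system:

```python
# Toy illustration only: a naive headline scanner that turns alarming
# keywords into a sell signal. Real systems use NLP models, source
# credibility checks, and risk limits; this sketch has none of that.

NEGATIVE_KEYWORDS = {"explosion", "attack", "fire", "crash", "bombing"}

def headline_signal(headline: str) -> str:
    """Return a crude trade signal based on headline keywords."""
    words = set(headline.lower().replace(",", " ").split())
    return "SELL" if words & NEGATIVE_KEYWORDS else "HOLD"

# A hoax image with a scary caption from a verified-looking account
# is enough to trip a trigger like this one at market open.
print(headline_signal("Explosion reported near the Pentagon"))  # SELL
print(headline_signal("Markets open flat on quiet Monday"))     # HOLD
```

A trigger this naive is exactly why a convincing fake, amplified at 9:30 a.m., can produce a real (if brief) dip before humans debunk it.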



The Clear and Present AI Danger

Does artificial intelligence threaten to conquer humanity? In recent months, the question has leaped from the pages of science fiction novels to the forefront of media and government attention. It’s unclear, however, how many of the discussants understand the implication of that leap.

In the public mind, the threat either focuses narrowly on the inherent confusion of ever-better deep fakes and its consequences for the job market, or points in directions that would make a great movie: What if AI systems decide that they’re superior to humans, seize control, and put genocidal plans into practice? That latter focus is obviously the more compelling of the two.

 

 

While such a nightmare may be possible in theory, it’s remote. The clear and present danger that AI poses will destroy us long before our rebellious automated servants declare themselves our exterminationist overlords. Two critical words summarize the threat: values and authority.

Take values first. For all their sophistication and mystery, AI systems are basically pattern detectors. Nearly all human behavior—and an even larger share of non-human occurrences—follows predictable patterns. Increasingly sophisticated recording mechanisms provide AI systems with a growing body of past data in which to find patterns. Increasingly sophisticated algorithms provide AI systems with rapidly improving capabilities to find the patterns in those data.

AI becomes interesting, however, only when it projects those patterns into the future. Few care, for example, that AI can find patterns in Shakespeare, though many will be fascinated when AI composes a “Shakespearean tragedy” set in the American Civil War using language, style, idiom, and skills previously considered unique to the Bard.:snip:
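To make the op-ed’s “pattern detector” framing concrete, here is a minimal sketch, using a toy linear fit rather than anything resembling a production model, of what “detect a pattern in past data, then project it forward” means:

```python
# Minimal sketch of detect-then-extrapolate. The data and the linear
# model are toy choices; real AI systems detect far richer patterns,
# but the two-step loop is the same.
import numpy as np

past = np.array([2.0, 4.1, 5.9, 8.2, 10.1])   # observations so far
steps = np.arange(len(past))                   # time indices 0..4

slope, intercept = np.polyfit(steps, past, 1)  # detect a linear pattern
prediction = slope * len(past) + intercept     # project it one step ahead

print(f"detected pattern: y = {slope:.2f}*x + {intercept:.2f} (approx.)")
print(f"projected next value: {prediction:.2f}")
```

The op-ed’s point is that the interesting, and risky, step is the projection: whatever the model produces comes entirely from the patterns that sat in its past data.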


  • 2 weeks later...

China’s Burgeoning AI Supremacy Threatens American Stability

The U.S. and China are in a digital arms race to determine the future of AI. If we lose, the results could be disastrous.

 

 

During his opening remarks before the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law, Sam Altman, the chief executive of the artificial intelligence start-up OpenAI, confirmed what many a Luddite has long feared: AI is here to stay, and its proliferation is unavoidable.

Altman expressed his belief that AI could be used to empower humanity, but warned that without guardrails its unregulated use could wreak havoc: “OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks.”:snip:


Increased use of AI on the job shows disturbing health trend, study finds

Frequent use of AI by employees at work could have 'potentially damaging mental and physical impacts' on workers, a study found

 

 

People who work closely alongside artificial intelligence are more likely to experience loneliness, binge drinking and insomnia than colleagues who work alongside humans, according to a new study.  

 

The release of ChatGPT last year opened the floodgates to artificial intelligence, as people across the globe rushed to use the chatbot, which can mimic human conversations, while some industries readied to incorporate the technology into day-to-day tasks.:snip:


Jun 19, 2023
We've done a number of shows about the danger of AI -- mostly 'hard,' or self-aware AI. However, like fire, artificial intelligence makes a terrible master but a useful servant. Case in point: a recently discovered, barely audible song that John Lennon recorded directly into a hand-held cassette player before his death in 1980 has been processed by an artificial intelligence algorithm 'trained' to recognize and isolate Lennon's voice, turning a barely audible mess into a studio-quality track. Now, 23 years after their last group recording, The Beatles are about to release a new song!


'Decisive actions' on AI coming in next few weeks, White House says

The White House said Tuesday it will soon take "decisive actions" to get ahead of the rapid advancement of AI technology.

 

"The White House Chief of Staff office is overseeing a process to rapidly develop decisive actions we can take over the coming weeks," a White House official said.

"White House principals have met to discuss this issue 2-3 times a week in addition to ongoing daily work being done across the White House and agencies," the official added. "White House officials are also working on securing commitments from leading AI companies to combat challenges from the government and the private sector side.":snip:


  • 2 weeks later...
2 hours ago, Geee said:

Biden administration pushing to make AI woke, adhere to far-left agenda: watchdog

Top Biden officials 'rigging' AI systems to promote leftist ideas, group claims

Quote

"Biden is being advised on technology policy, not by scientists, but by racially obsessed social academics and activists. We're already seen the biggest tech firms in the world, like Google under Eric Schmidt, use their power to push the left's agenda. This would take the tech/woke alliance to a whole new, truly terrifying level."

This pretty much sums up The Entire Biden Administration.


  • 2 weeks later...

Potential AI revolution puts 27% of jobs at high risk, report says

Approximately 27% of jobs are at high risk of automation amid the artificial intelligence revolution, according to a new report. 

The Organization for Economic Co-operation and Development (OECD), a global policy forum with 38 member countries, said in a Tuesday report that high-skill occupations are still at the least risk of automation. 

Low- to middle-skilled jobs are most at risk, including construction, farming, fishing, forestry and, to a lesser extent, production and transportation, the report said.

The bloc said that while adoption of AI is still relatively low, rapid progress, falling technology costs and the increasing availability of workers with AI skills suggest that OECD countries may be on the brink of an AI revolution.:snip:


The Jokes Write Themselves When Kamala Harris Tries to Explain Artificial Intelligence

:snip:I think the first part of this issue that should be articulated is AI is kind of a fancy thing. First of all, it’s two letters.

It means artificial intelligence, but ultimately what it is, is it’s about machine learning.:snip:

 

:snip:And so, the machine is taught — and part of the issue here is what information is going into the machine that will then determine — and we can predict then, if we think about what information is going in, what then will be produced in terms of decisions and opinions that may be made through that process.

So to reduce it down to its most simple point, this is part of the issue that we have here is thinking about what is going into a decision, and then whether that decision is actually legitimate and reflective of the needs and the life experiences of all the people.:snip:

