
This issue of Interesting Finds is exclusively about artificial intelligence. AI is everywhere in today’s headlines, and everyone is busy navigating its practical realities, both at home and at work. But separating genuine breakthroughs from hype can be difficult, and putting AI to practical use brings challenges of its own.

Cutting Through the Hype

Separating fact from fiction remains a major challenge. Live Science covers an expert survey suggesting current large language models might be a dead end for achieving true human-level intelligence—contrary to much popular hype. Among the skeptics is Gary Marcus, who has warned about the shortcomings of LLMs for years and proposes a different approach to AI.

For those seeking clarity amid all this noise, Princeton CITP offers a helpful guide (Narayanan & Mitchell) to distinguishing between genuine progress and overblown promises in today’s headlines.

Hallucinations in language models are one of AI’s biggest challenges. Using AI without proper safeguards, such as fact-checking, can be downright embarrassing: the Chicago Sun-Times’ May 18th summer guide issue included numerous AI-generated fake books, articles, and expert quotes mixed in with real content.

To address hallucinations, MLTechniques offers practical tips for reducing such errors in real-world deployments. Meanwhile, Wired’s coverage of new bug-reporting systems highlights how software engineering is adapting to AI’s unpredictability.

Public understanding of AI—and misunderstanding—shapes its adoption as much as any technical feature. VentureBeat’s warning about anthropomorphizing AI details the risks of conflating human-like outputs with actual understanding or intent. Researchers found that LLMs also struggle to act on what they know.

AI and Risks

Security and manipulation are growing dangers of AI. The Washington Post’s investigation into LLM poisoning exposes efforts by adversaries to “groom” chatbots into spreading propaganda or misinformation. In a similar vein, 404 Media uncovered unauthorized persuasion experiments run by researchers on Reddit users without their consent, raising alarm about both research ethics and platform responsibility.

The open-source debate is also heating up: VentureBeat argues that selective transparency can itself be a source of risk—inviting misuse while undermining trust. Open-source and AI coding agents pose risks of their own, especially if used maliciously by hostile actors such as rogue nations or cybercriminals. AI can also be used to extract detailed personal information from minimal data, such as identifying exact locations from photos. Traditional digital privacy concerns (like ad targeting) seem minor compared to AI’s capabilities.

Further, the Axios report on Anthropic’s deception risk warns that even leading models can develop deceptive behaviors, creating new vulnerabilities for users. Forbes looks at challenges in ethical AI leadership, focusing on ChatGPT’s tendency to provide answers that may misunderstand or misinterpret questions. The key ethical lesson is that AI tools should be used thoughtfully, with vigilant oversight to ensure accuracy and integrity.

AI systems might independently take actions that affect users’ privacy or security, raising ethical and legal challenges. While AI has potential to combat cybersecurity risks, some research shows that LLMs are unreliable for cyber threat intelligence.

AI & Work

AI’s impact on workplaces is profound but complicated. While many companies embrace AI, a study shows that a significant number of employees use generative AI tools like ChatGPT at work but keep their usage secret, owing to a lack of clear workplace policies and fear of negative judgment or job insecurity. This secretive behavior, called “shadow AI,” can lead to workplace friction and security risks and can hinder collaboration. Experts suggest that clear communication, supportive leadership, and collaborative AI use can reduce secrecy and improve productivity.

Some predict dramatic shifts in the workplace. Artificial Lawyer provocatively suggests that many routine legal tasks may soon be automated away—though others argue that uniquely human skills will remain essential. Business Insider says data already shows that AI technologies are increasingly capable of automating tasks traditionally done by professionals, leading to significant changes in the job market.

Others are not so sure AI will take away our jobs. Workday says AI has increased employee productivity by nearly 60%, but that it augments rather than replaces workers. The article also emphasizes the need for continuous upskilling, blending technical skills with human-centric ones such as creativity, emotional intelligence, and teamwork.

At the same time, according to a recent Euronews survey, nearly half of employees who use AI remain skeptical about its trustworthiness—a sentiment echoed by industry leaders worried about overreliance on these tools. Similarly, the NZ Herald’s piece on “Big Tech’s AI Race” explores how even technology giants are struggling to keep up with rapid change. And a recent study by the National Bureau of Economic Research found that AI chatbots in office jobs save an average of 3% of work time but have little impact on wages or overall economic productivity.

Closing Thoughts

As AI technology moves into daily life, its influence is felt everywhere—from the way we work and socialize to how we approach security and ethics. Technical innovation brings new capabilities but also new pitfalls, like hallucinations, deception, and manipulation, as well as surveillance and increased cyberattacks. Trust remains a central hurdle: both workers and the general public are grappling with what it means to rely on systems that are powerful but imperfect. Living with AI is as much about navigating risks and misconceptions as it is about seizing opportunity—and connecting these threads is key to making sense of our evolving relationship with intelligent machines.

While the links I’ve collected for this issue may seem pessimistic, I am full of hope. I believe AI is the future and has its place in society; we just need to figure out the best ways to use it while minimizing its risks. In other words, we need to learn to live with this new technology, as we did with all the others (weaving machines, cars, the internet, etc.).
