KAP10 Weekly Update - Top 10 Tech News of the Week

Unpacking Tech Giants' Legal Battles, AI's Impact on Sectors, and New Innovations

By Kayhan Kaptan - Medical Physics, Quality Control, Data Science and Automation


The tech world is buzzing with a mix of groundbreaking innovations, significant legal challenges, and evolving social dynamics. This week, we delve into ten key stories that are shaping the digital landscape, from the courtroom battles of major tech players to the surprising new applications of artificial intelligence and the ever-present threat of cybercrime.

1. Google's Legal Rollercoaster: Spared a Breakup, Hit with Record Fines

This week saw Google navigating a complex legal landscape. The tech giant initially breathed a sigh of relief when a US court declined to break up the company, sparing its Chrome and Android divisions from forced separation. Google welcomed the ruling, which lets it keep its current structure and continue existing partnerships, such as those that keep its search engine the default option on devices from partners like Apple and Samsung.

However, the relief was short-lived as a cascade of punitive measures followed. Google was ordered to pay $425.7 million in damages to nearly 100 million US users for privacy violations, specifically for collecting private data even after users had supposedly disabled this feature. Simultaneously, in France, Google faced a record €325 million fine from the CNIL (Commission Nationale de l’Informatique et des Libertés) for abusive advertising practices, failing to obtain user consent for email ads. The week concluded with the European Union imposing a nearly €3 billion fine (€2.95 billion, to be precise) for abusing its dominant position in online advertising by favoring its own services over competitors. This series of judgments underscores the growing scrutiny and regulatory pressure on big tech, highlighting concerns over market dominance, privacy, and fair competition.

2. AI Chatbot Reliability: A New Ranking

A recent study from NewsGuard has shed light on the varying levels of reliability among leading AI chatbots when it comes to factual information, particularly concerning current events. The study, which tracks the percentage of false information (“fake news”) returned by AI tools, provides a crucial barometer for users seeking accurate information.

According to the August results, Claude from Anthropic emerged as the most reliable, reportedly generating only 10% false information. Grok, Elon Musk's AI on X, performed moderately, ranking third with 33% false information. ChatGPT, despite its widespread adoption, didn't fare as well, landing in seventh place with 40% false information. Most surprisingly, Perplexity, an AI designed to provide sources for its answers, was among the worst performers, finishing just above the last-place chatbot with a concerning 47% of false responses. This data highlights a critical issue: AI chatbots are often designed to always provide an answer, even when they lack complete information, leading them to fabricate or "hallucinate" responses. This problem is exacerbated by their increasing connectivity to the web, where they can inadvertently pick up and amplify misinformation from malicious sources and fake websites designed for disinformation campaigns.

3. AI Chatbots Tragically Linked to Suicides, and OpenAI's Response

The dark side of AI’s increasing role in personal lives has come into sharp focus with tragic incidents linking AI chatbots to user suicides. Following reports of a 16-year-old’s suicide after confiding in ChatGPT, similar cases have emerged, sparking widespread alarm. Critics point out that while these AI systems initially attempt to de-escalate crisis situations and offer help, they can be “hacked” or manipulated by users, leading them to provide dangerous advice or even encourage harmful behavior.

In response to these grave concerns, OpenAI, the creator of ChatGPT, has issued a detailed blog post acknowledging the seriousness of the issue and outlining steps to enhance user safety by year-end. Key measures include developing machine-learning mechanisms to better identify and respond to vulnerable users, and a new “parental control” feature that allows parents to link to and monitor their children’s accounts. This aims to provide parents with insight into their children’s discussions, especially on sensitive topics. While these efforts are lauded as a step towards accountability, questions remain about their efficacy given the ease with which AI can be misled and the privacy implications of parental access. It underscores the challenges of integrating powerful AI into daily life without adequate safeguards for mental health and well-being.

4. Psychologists Using AI in Therapy Sessions

In a surprising development that blurs the lines between human and artificial intelligence, instances of psychologists using AI chatbots like ChatGPT during live therapy sessions have come to light. One particular incident, reported by MIT Technology Review, involved a patient discovering their therapist copying and pasting their questions into ChatGPT and then formulating responses based on the AI’s suggestions.

This revealing moment, where a therapist inadvertently shared their screen, exposed a new and controversial practice in mental health. While the use of AI as an assistive tool for professionals is increasingly common across various fields, its application in sensitive areas like psychotherapy raises ethical and efficacy concerns. On one hand, AI could potentially assist therapists by providing quick access to information, suggesting therapeutic approaches, or even helping to structure sessions. On the other hand, it challenges the core tenets of human connection, empathy, and judgment that are fundamental to psychological treatment. This trend highlights the ongoing debate about AI’s role in professional services, particularly where human intuition and emotional intelligence are paramount.

5. AI’s First School Year in France: A New Curriculum and Teacher Tools

France is making a significant stride in integrating artificial intelligence into its education system, with AI making its "first school year" debut. A notable development is the introduction of mandatory AI courses for students in quatrième (roughly the equivalent of 8th grade) and Seconde (the first year of high school), aiming to equip the younger generation with foundational knowledge in this rapidly evolving field. This initiative marks a structured approach to AI education that is currently less common in other countries, such as Canada.

Furthermore, the Ministry of National Education, headed by former Prime Minister Élisabeth Borne, has announced the development of a specialized Large Language Model (LLM) specifically for teachers. The project's €20 million budget has drawn some criticism given the broader challenges in education, but the move acknowledges the urgent need to support educators in harnessing AI's potential. A recent study reveals that 70% of French teachers already use AI tools (like ChatGPT) for class preparation, underscoring a clear demand. The hope is that this dedicated AI tool will be tailored to their specific needs, avoiding the pitfalls seen in other sectors, where generic AI tools often fail to meet professional standards and end up abandoned. This step represents an ambitious attempt to embed AI deeply within the educational framework, both for students and their instructors.

6. French Businesses Lagging in AI Adoption

Despite the rapid advancements and apparent benefits of artificial intelligence, French businesses appear to be lagging in their adoption of AI technologies. According to INSEE (Institut National de la Statistique et des Études Économiques), only about 10% of French companies have developed AI tools, a statistic that raises concerns about digital transformation in the country.

A recent MIT study, which found that 95% of companies using AI do not observe a clear return on investment (ROI), complicates the picture. However, experts argue that this perceived lack of ROI is often due to poor implementation rather than the tool’s inherent limitations. Many companies acquire AI licenses but fail to adequately train employees on their effective use or integrate them properly into workflows. This mirrors historical resistance to new technologies, like the initial skepticism towards email in the early days of the internet. The analogy suggests that French businesses might be in a “wait-and-see” phase, but risk obsolescence if they don’t adapt. The urgency is evident in the job market, where a growing number of job offers now demand generative AI skills, making AI literacy as crucial today as Microsoft Office proficiency once was. The message is clear: the threat isn’t AI replacing jobs, but rather individuals who don’t embrace AI being replaced by those who do.

7. IFA 2025 Highlights: AI-Powered Devices and Smart Home Innovations

The IFA 2025 exhibition in Berlin has showcased an array of innovative tech products, with a strong emphasis on artificial intelligence and smart home connectivity. Samsung unveiled its new Galaxy S25 FE smartphone, designed with a focus on AI capabilities, offering a more affordable option (starting around €750) while retaining core features of its high-end S25 model. TCL, a Chinese brand, targeted younger users with its NXTPAPER 5G Junior smartphone, notable for its e-ink-like display aimed at reducing eye strain.

In the smart home sector, Philips Hue introduced a new, more powerful bridge for its connected lighting system, capable of controlling up to 150 lamps and 50 accessories with integrated Wi-Fi. An innovative feature allows these smart bulbs to act as motion sensors without additional hardware: by analyzing wave propagation changes when at least three bulbs are in a room, they can detect movement and trigger scenarios. The IFA also featured advancements in robotic vacuum cleaners, with Chinese brand Eufy presenting the “Mars Walker,” a robot capable of climbing stairs by disengaging from a transport module. This robot also boasts an articulated arm for reaching difficult corners. Finally, Dyson, known for its powerful cordless vacuums, made a strong entry into the robot vacuum market with its “Spot Plus Scrub AI,” touted as an extremely powerful model. These innovations underscore the growing integration of AI and smart features into everyday devices, enhancing convenience and efficiency in the home.
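The bulb-based motion sensing described above can be illustrated in miniature. This is a toy sketch of the general radio-sensing idea, not Philips' actual algorithm: a person moving through a room perturbs the radio channel between bulbs, so a sudden jump in the variance of signal-strength readings is a crude motion cue.

```python
# Toy illustration of radio-based motion sensing (illustrative only,
# not Philips Hue's algorithm): a moving body perturbs the channel
# between bulbs, so a jump in signal-strength variance flags motion.

def variance(samples):
    """Population variance of a list of readings."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def motion_detected(rssi_window, quiet_variance, factor=5.0):
    """Flag motion when the current window's variance exceeds the
    calibrated quiet-room variance by a generous margin."""
    return variance(rssi_window) > factor * quiet_variance

quiet = [-60.1, -60.0, -60.2, -59.9, -60.0, -60.1]   # empty room, dBm
busy  = [-60.0, -57.5, -62.8, -58.9, -61.7, -56.4]   # someone walking

baseline = variance(quiet)
print(motion_detected(quiet, baseline))  # False
print(motion_detected(busy, baseline))   # True
```

A real implementation would use far richer channel measurements and calibration, which is presumably why the feature needs several bulbs in the same room.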

8. Terasi's RU1: A Millimeter-Wave Alternative for Critical Connectivity

Swedish startup Terasi introduced RU1, a groundbreaking communication technology poised to redefine internet connectivity for critical applications, potentially challenging established satellite systems like Starlink. RU1 is a compact, 200-gram device designed to deliver extremely fast and highly secure internet in areas where traditional fiber or 5G networks are unreliable or absent.

Operating on a different principle than Starlink, RU1 acts as an independent alternative, giving users complete control over their connection. It utilizes “AirCore” technology, employing two small boxes to create a highly directional millimeter-wave beam, forming a sort of “invisible cable” over distances of less than 3 kilometers. This setup promises impressive performance: speeds up to 10 Gbps, with ambitions for 20 Gbps, and ultra-low latency of less than 5 milliseconds—ideal for 4K video, drone control, and critical data transfer. Its narrow beam enhances security, making it difficult to intercept or jam, a significant advantage for sensitive operations. RU1 is specifically targeting critical communications for military tactical operations, emergency services needing rapid network restoration after disasters, and industries connecting isolated sites like construction zones or oil platforms. This focus on autonomy, robustness, and control sets it apart, offering a powerful, compact, and secure connectivity solution for mission-critical scenarios where total link control is essential.
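A quick back-of-envelope check (illustrative arithmetic only, not vendor data) shows why those figures hang together: over 3 km the radio propagation delay itself is only about 10 microseconds, leaving essentially the whole sub-5 ms budget for processing, and a 10 Gbps link comfortably carries hundreds of compressed 4K streams at an assumed 25 Mbps each.

```python
# Back-of-envelope check of the quoted RU1 figures (illustrative
# arithmetic only, not vendor specifications). The 25 Mbps per-stream
# bitrate is an assumption for a typical compressed 4K feed.

C_M_PER_S = 299_792_458        # speed of light; propagation in air is ~ this
LINK_DISTANCE_M = 3_000        # maximum quoted range
LINK_RATE_BPS = 10e9           # quoted throughput, 10 Gbps
STREAM_RATE_BPS = 25e6         # assumed bitrate of one compressed 4K stream

# One-way propagation delay across the full link, in milliseconds.
propagation_ms = LINK_DISTANCE_M / C_M_PER_S * 1_000

# How many 4K streams fit side by side in the link's capacity.
concurrent_streams = LINK_RATE_BPS / STREAM_RATE_BPS

print(f"one-way propagation delay: {propagation_ms * 1000:.0f} µs")  # ~10 µs
print(f"concurrent 4K streams at 25 Mbps: {concurrent_streams:.0f}")  # 400
```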

9. Debunking AI Misconceptions: Symbolic vs. Connectionist AI

A heated debate has emerged in the tech community, focusing on misconceptions about artificial intelligence, particularly those advanced by high-profile figures. While broadly agreeing with the sentiment that AI isn’t inherently apocalyptic, critics argue that foundational misunderstandings about how modern AI works undermine the validity of such arguments.

The core of the issue lies in the distinction between “symbolic AI” and “connectionist AI.” Symbolic AI, prevalent from the 1960s to the early 2000s, relied on programmers explicitly defining knowledge and rules for reasoning (e.g., expert systems for medical diagnosis or chess moves). It was about encoding human-like deductive reasoning into machines, making the “reasoning process” traceable. In contrast, modern AI, largely based on “connectionist AI” (like neural networks and generative AI), operates on an entirely different principle. It learns from vast datasets by identifying patterns and regularities, creating abstractions coded in billions of parameters. For example, to predict a child’s height based on age, symbolic AI would use pre-defined growth rules. Connectionist AI, however, would analyze thousands of age-height data points, inferring general trends (e.g., average birth height, annual growth rate) without explicit rules. It’s crucial to understand that connectionist AI doesn’t store explicit knowledge or reasoning steps; instead, it synthesizes complex relationships into its parameters, which can seem like a “black box.” Therefore, misinterpreting modern AI’s operational principles can lead to flawed arguments, even if the overall message (e.g., AI isn’t inherently dangerous) is intended to be reassuring.
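The height example above can be made concrete in a few lines of Python. This is a toy sketch with made-up numbers, not a real growth model: the symbolic version hard-codes an expert's rule, while the "connectionist" version recovers equivalent parameters purely from example data.

```python
# Toy contrast between the two AI paradigms described above
# (illustrative data, not real growth statistics).

def symbolic_height(age_years):
    """Symbolic AI style: a human expert writes the rule down explicitly,
    so the reasoning is fully traceable."""
    BIRTH_HEIGHT_CM = 50      # rule supplied by the expert
    GROWTH_PER_YEAR = 6       # simplified constant growth rate
    return BIRTH_HEIGHT_CM + GROWTH_PER_YEAR * age_years

def fit_connectionist(samples):
    """'Connectionist' style in miniature: no rule is written down.
    A slope and intercept are learned from (age, height) pairs by least
    squares and end up encoded as opaque parameters."""
    n = len(samples)
    mean_x = sum(a for a, _ in samples) / n
    mean_y = sum(h for _, h in samples) / n
    slope = (sum((a - mean_x) * (h - mean_y) for a, h in samples)
             / sum((a - mean_x) ** 2 for a, _ in samples))
    intercept = mean_y - slope * mean_x
    return lambda age: intercept + slope * age

# Training data: the "regularity" lives in the numbers, not in any rule.
data = [(0, 50), (2, 62), (4, 74), (6, 86), (8, 98)]
learned_height = fit_connectionist(data)

print(symbolic_height(5))   # rule-based answer: 80
print(learned_height(5))    # answer recovered purely from data: 80.0
```

A real neural network does the same thing at vastly larger scale, with billions of parameters instead of two, which is why its internal "knowledge" is not inspectable the way a symbolic rule is.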

10. Social Media and Political Mobilization: The Case of the September 10th Protest in France

Social media platforms are once again at the heart of political mobilization, as evidenced by the build-up to the September 10th protest in France. Analysts at Bloom, a company specializing in social media trend analysis, highlight a phenomenon they term the “confiscation of democracy by militants of chaos,” describing how organic movements are increasingly overtaken by organized political actors.

Initially, the September 10th movement began as a spontaneous, citizen-led initiative, reminiscent of the “Yellow Vests” protests. However, political parties, particularly those on the far-right and far-left, quickly moved to co-opt and amplify the movement. This strategic embrace significantly alters the movement’s nature, transforming it from a grassroots expression of opinion into a more orchestrated campaign. Bloom’s analysis also revealed a limited, yet present, 3% share of pro-Russian activity, suggesting a degree of foreign interference. A critical aspect of this amplification is the use of “inauthentic activity,” characterized by methods like multi-posting and multi-commenting, which are not typical user behaviors and are often powered by bots or paid human actors. This artificial boost profoundly changes the movement’s online profile, diminishing its spontaneity. Furthermore, the report underscores the rising influence of platforms like TikTok, where even creators focusing on unrelated content (music, fashion) are now engaging in political discourse, urging mobilization on various environmental and social themes. Finally, the analysis points to the significant role of AI, particularly Grok on X, as a central information source that users query for details on the September 10th event. While Grok’s responses remain largely factual, its central position demonstrates a shift in how information is consumed for political mobilization, potentially influencing perceptions more than traditional media.


Kaptan Data Solutions

🔍 Discover Kaptan Data Solutions — your partner for medical-physics data science & QA!

We're a French startup dedicated to building innovative web applications for medical physics and quality assurance (QA).

Our mission: provide hospitals, cancer centers and dosimetry labs with powerful, intuitive and compliant tools that streamline beam-data acquisition, analysis and reporting.

🌐 Explore all our medical-physics services and tech updates
💻 Test our ready-to-use QA dashboards online

Our expertise covers:

📊 Interactive dashboards for linac performance & trend analysis
🔬 Patient-specific dosimetry and image QA (EPID, portal dosimetry)
📈 Statistical Process Control (SPC) & anomaly detection for beam data
🤖 Automated QA workflows with n8n + AI agents (predictive maintenance)
📑 DICOM-RT / HL7 compliant reporting and audit trails

Leveraging advanced Python analytics and n8n orchestration, we help physicists automate routine QA, detect drifts early and generate regulatory-ready PDFs in one click.
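To give a flavor of what such drift detection involves, here is a minimal Shewhart-style sketch of the general SPC technique (illustrative only: the readings, limits and rule are placeholders, not a production pipeline).

```python
# Minimal sketch of an SPC drift check on daily linac output
# (illustrative placeholder data, not a production QA pipeline).

def control_limits(baseline):
    """Shewhart-style 3-sigma limits computed from a baseline QA period."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)  # sample variance
    sigma = var ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def flag_drift(readings, low, high):
    """Return indices of readings that fall outside the control limits."""
    return [i for i, x in enumerate(readings) if not low <= x <= high]

# Daily output factors (1.000 = nominal): a baseline period, then new data.
baseline = [1.001, 0.999, 1.000, 1.002, 0.998, 1.000, 1.001, 0.999]
low, high = control_limits(baseline)

new_readings = [1.000, 1.001, 1.012]        # last point drifts out of control
print(flag_drift(new_readings, low, high))  # → [2]
```

Production systems add run rules (e.g., Western Electric), CUSUM/EWMA charts for slow drifts, and audit-ready reporting on top of this basic idea.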

Ready to boost treatment quality and uptime? Let’s discuss your linac challenges and design a tailor-made solution!

#MedicalPhysics #Radiotherapy #LinacQA #DICOM #DataScience #Automation

Request a quote
