
Meta AI searches made public – but do all its users realise?

How would you feel if your internet search history was put online for others to see?

That may be happening to some users of Meta AI without them realising, as people’s prompts to the artificial intelligence tool – and the results – are posted on a public feed.

One internet safety expert said it was “a huge user experience and security problem” as some posts are easily traceable to social media accounts.

This means some people may be unwittingly telling the world about their searches – such as asking the AI to generate scantily-clad characters or help them cheat on tests.

Meta says chats are private by default, and if users make a post public they can choose to withdraw it later.

Before a post is shared, a message pops up which says: “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.”

However, given the private nature of some of the queries, it is not clear whether users understand that their searches are being posted to a public “Discover” feed on the Meta AI app and website, or that these could be traced to their other social accounts through usernames and profile pictures.

The BBC found several examples of people uploading photos of school or university test questions, and asking Meta AI for answers.

One of the chats is titled “Generative AI tackles math problems with ease”.

Another conversation posted publicly showed a user exploring questions about their gender and whether they should transition.

There were also searches for women and anthropomorphic animal characters wearing very little clothing.

One search, which could be traced back to a person’s Instagram account because of their username and profile picture, asked Meta AI to generate an image of an animated character lying outside wearing only underwear.

‘You’re in control’

Meta AI, launched earlier this year, can be accessed through Meta’s social media platforms Facebook, Instagram and WhatsApp.

It is also available as a standalone product which has a public “Discover” feed.

Users can opt to make their searches private in their account settings.

Meta AI is currently available in the UK through a browser, while in the US it can be used through an app.

In a press release from April which announced Meta AI, the company said there would be “a Discover feed, a place to share and explore how others are using AI”.

“You’re in control: nothing is shared to your feed unless you choose to post it,” it said.

But Rachel Tobac, chief executive of US cyber security company Social Proof Security, posted on X saying: “If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem.”

She added that people do not expect their AI chatbot interactions to be made public on a feed normally associated with social media.

“Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked,” she said.

The State of Generative AI 2025: Breakthroughs, Business Shifts, and the Race for AI Talent

Generative AI: The Rapidly Evolving Tech That’s Reshaping Our World

In a world where technology moves faster than ever, generative AI has emerged as one of the most powerful forces shaping industries, transforming the workplace, and even changing how we learn and interact with machines. From text and image creation to advanced research tools and voice assistants, generative AI is making headlines almost daily — and for good reason. The impact is widespread, and staying informed is no longer optional.

Here’s a round-up of the latest developments and trends in the generative AI space across tools, business, and technology.

Generative AI Tools – Key Updates

February 2025

1. OpenAI Launches GPT-4.5 With Smarter Emotional Intelligence
OpenAI’s latest model, GPT-4.5, brings noticeable improvements in understanding nuance and emotional cues, offering smoother, more intuitive interactions. It’s designed to align more closely with user intent and provide more reliable outputs, making it ideal for creative tasks like writing, design, and problem-solving.

2. Alexa’s Next Evolution in Conversational AI Is Coming
Amazon is gearing up to unveil a major upgrade to Alexa during its February 26 event. This next-gen version is expected to deliver a more conversational and capable assistant, thanks to advanced generative AI. Expect smarter home control and deeper natural language interaction — a direct move to stay competitive with Google and OpenAI.

January 2025

1. DeepSeek V3 – A Strong Open Challenger
DeepSeek released version 3 of its model, claiming it outperforms Meta’s Llama 3.1 and even OpenAI’s GPT-4 in certain coding tasks. The standout? It was trained faster and more affordably. DeepSeek is also opening access to this model, encouraging broader use across development and enterprise applications.

They didn’t stop there. DeepSeek also introduced Janus Pro 7B, an open-source image generator said to outperform OpenAI’s DALL·E 3.

December 2024

1. Google DeepMind Rolls Out Gemini 2.0
The latest Gemini model can generate and understand text, images, and audio. It introduces “Deep Research,” a tool designed for complex reasoning and report generation — effectively acting as an AI research assistant.

2. Google Debuts Veo, Its Video-Generation Model
Veo is Google’s answer to OpenAI’s Sora. With a focus on creativity and high fidelity, the model shows how fast video-generation capabilities are evolving in generative AI.

3. OpenAI Experiments With AI-Driven Education
OpenAI is developing a new way to deliver online education by pairing AI chatbots with digital courses. The goal is to create an interactive learning experience that adapts to the student, offering real-time assistance and deeper engagement.

November 2024

1. Perplexity AI Turns Into a Shopping Assistant
Perplexity AI now lets users shop directly through its platform, with AI helping to interpret user intent and deliver tailored recommendations. It’s a glimpse into what personalized e-commerce might look like in the near future.

2. Google Maps Gets Smarter With Gemini AI
Google Maps is integrating its Gemini model to improve user experience. Instead of standard queries, users can ask for contextual suggestions like “fun things to do with friends tonight,” and get curated, local results — elevating Maps beyond basic navigation.

AI in Business and the Workplace

May 2025

AI Boosts Performance — But Motivation May Suffer
A Harvard Business Review study of over 3,500 workers found that while generative AI tools improve productivity, they can also reduce motivation and engagement. The researchers suggest that leaders need to rethink how work is structured to maintain a sense of purpose and fulfillment alongside AI use.

March 2025

Heavy GenAI Use May Dull Critical Thinking
A Microsoft and Carnegie Mellon study looked at how generative AI tools impact decision-making. Professionals who use tools like ChatGPT and Copilot weekly were found to be more likely to rely on AI output without critically evaluating it — potentially weakening independent problem-solving over time.

January 2025

Creative and Coding Jobs See Decline Due to AI
A Harvard Business Review report noted a 30% reduction in roles involving writing, coding, and image generation. The shift underscores the importance of complementary skills like creativity, problem-solving, and strategic thinking — areas where humans still have the upper hand.

Executives Shift From Pilots to Performance
According to NTT Data, nearly 90% of senior decision-makers are done experimenting with AI and are now focusing on real-world applications that drive efficiency, revenue, and ROI. It’s no longer about “trying AI” — it’s about using it to deliver measurable value.

November 2024

AI Skills Now Outrank Traditional Experience in Hiring
Microsoft and LinkedIn’s 2024 Work Trend Index revealed that 71% of business leaders would hire a less experienced candidate with generative AI skills over a more seasoned candidate without them. The job market is clearly tilting toward AI fluency.

GenAI Job Mentions Triple in the U.S., Surge Globally
Data from Indeed’s Hiring Lab showed that job postings mentioning “GenAI” tripled in the U.S. between September 2023 and September 2024, with even faster growth in other countries. The message is clear: demand for GenAI talent is global — and it’s growing fast.

JPMorgan CEO Envisions a 3.5-Day Workweek Thanks to AI
Jamie Dimon, CEO of JPMorgan Chase, recently shared a bold prediction: in the near future, people might work just 3.5 days a week and live healthier, longer lives — all thanks to the productivity gains driven by AI.

Nvidia Unveils “Signs” AI Platform to Boost American Sign Language Learning and Accessibility

Nvidia, in collaboration with the American Society for Deaf Children and creative agency Hello Monday, has launched an AI platform called “Signs” to teach American Sign Language (ASL), the third most common language in the United States. The platform aims to bridge communication gaps between deaf and hearing communities by providing interactive learning tools and an open-source dataset. “Signs” allows learners to practice signs with the help of a 3D avatar and analyze their movements via webcam for real-time feedback, while signers can contribute by recording words to expand the dataset, targeting 400,000 video clips for 1,000 words. Validated by ASL experts for accuracy, the platform supports language education for families and individuals, with Nvidia planning to leverage the data to develop inclusive AI technologies. The project is evolving to include facial expressions and regional variations, and the dataset will be made publicly available later this year.
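The real-time feedback loop described above (detect the learner’s hand landmarks via webcam, compare them against a validated reference sign) can be boiled down to a deliberately simplified sketch. This is not Nvidia’s pipeline: the function name, the 0-100 score, and the assumption that landmarks have already been extracted and normalized to [0, 1] are all invented for illustration.

```python
import math

def score_attempt(learner, reference):
    """Mean Euclidean distance between corresponding (x, y) hand landmarks,
    mapped to a 0-100 feedback score (closer poses score higher).
    Landmarks are assumed already normalized to the [0, 1] range."""
    assert len(learner) == len(reference), "landmark lists must align"
    mean_dist = sum(math.dist(a, b) for a, b in zip(learner, reference)) / len(learner)
    return max(0.0, 100.0 * (1.0 - mean_dist))
```

A real system would add per-finger weighting, temporal alignment across video frames, and (as the article notes is planned) facial expressions, but the core feedback signal is still a distance between the attempt and a reference.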
Keywords:
  • Nvidia
  • Signs
  • American Sign Language (ASL)
  • AI platform
  • Deaf community
  • Accessibility
  • Open-source dataset
  • Language learning
  • Real-time feedback
 
Hashtags:
#Nvidia #SignsAI #ASL #AmericanSignLanguage #AI #Accessibility #DeafCommunity #LanguageLearning #TechForGood

Microsoft develops AI model for videogames

On February 19, 2025, Microsoft announced the launch of a new artificial intelligence model called “Muse,” specifically designed for video game development. This model was developed in collaboration with Ninja Theory, a game development company under Xbox Game Studios, Microsoft’s gaming division. “Muse” aims to assist developers in creating visual elements (such as graphics and scenes) and in-game actions (such as character responses or interactive events), making it an innovative tool in the gaming industry.
Details About “Muse” and How It Works
  • Training Data: “Muse” was trained using data from “Bleeding Edge,” a multiplayer combat game released by Ninja Theory in 2020. The model relies on analyzing gameplay footage and controller inputs to understand the dynamics of three-dimensional worlds and physics within games, enabling it to generate interactive content based on player interactions.
  • Capabilities: “Muse” can produce visual designs for games, such as environments or in-game objects, as well as simulate actions or behaviors in response to player movements. This means it can assist in the ideation phase and accelerate the design process.
  • Objective: According to Microsoft, “Muse” is not merely a tool for creating final content but rather an assistive tool aimed at simplifying complex processes in game development, allowing developers to focus more on creative aspects like storytelling and gameplay experience.
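The training setup in the bullets above (learn from gameplay footage plus controller inputs, then generate plausible continuations) can be caricatured with a tiny transition model. Muse itself is a deep generative model over video frames; the table-based sketch below, with invented state and input names, is only a hypothetical illustration of the data flow.

```python
from collections import Counter, defaultdict

class ToyWorldModel:
    """Learn (state, controller_input) -> next_state transitions from
    recorded gameplay episodes, then replay the most common outcome.
    Stands in for a learned game world model; not Microsoft's Muse."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, episodes):
        # each episode is a list of (state, controller_input, next_state) triples
        for episode in episodes:
            for state, ctrl, nxt in episode:
                self.transitions[(state, ctrl)][nxt] += 1

    def predict(self, state, ctrl):
        outcomes = self.transitions.get((state, ctrl))
        if not outcomes:
            return state  # unseen pair: assume nothing changes
        return outcomes.most_common(1)[0][0]
```

The real model predicts pixels and physics rather than symbolic states, but the ideation use case is the same: given a situation and a player action, propose what happens next.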
Economic and Industry Context
  • Rising Development Costs: The video game industry is witnessing a continuous increase in development costs, with modern games requiring large teams, advanced technologies, and extended timeframes. “Muse” emerges as a potential solution to reduce these costs by automating certain technical aspects.
  • Decline in Spending on New Games: Amid economic uncertainty, consumers tend to stick to established and familiar titles rather than trying new ones. This situation pressures companies to find innovative ways to develop engaging games at lower costs, which Microsoft aims to achieve through artificial intelligence.
  • Microsoft’s Strategy: This announcement reflects Microsoft’s commitment to integrating AI into its gaming ecosystem, building on previous experiments like the virtual support assistant for Xbox. With “Muse,” the company is moving toward using AI at the core of the creative process.
Ninja Theory’s Stance
Despite the collaboration with Microsoft, Ninja Theory’s studio head, Dom Matthews, emphasized that the studio does not intend to use “Muse” to directly create final content for its games. Instead, it is viewed as a tool for generating ideas and speeding up prototyping, preserving the human touch in their titles, such as the “Hellblade” series.
Reactions and Expectations
  • Optimism: Some believe “Muse” could revolutionize game development by making it more dynamic and responsive, as Microsoft CEO Satya Nadella suggested, comparing its impact to the emergence of models like ChatGPT.
  • Controversy: There are concerns within the gaming industry about the growing reliance on AI, with some developers and players opposing its use for content creation out of fear of losing human creativity. Certain independent studios have even adopted slogans like “No Gen AI” to underscore their rejection of this trend.
The Future
Microsoft has stated that it will leave the decision on how to use “Muse” up to its individual studios, meaning we may see varied applications of this technology in future Xbox games. The model is currently in a research phase (developed by Microsoft Research teams), but it could evolve into a standard industry tool if it fulfills its promises.

Smart Scientist Assistant: Revolutionizing Scientific Discoveries with AI

In a groundbreaking move to fast-track scientific progress, Juraj Gottweis, Google Fellow, and Vivek Natarajan, Research Lead, unveiled the “AI Co-Scientist” on February 19, 2025. This cutting-edge multi-agent AI system, powered by Gemini 2.0, acts as a virtual partner for scientists, helping them craft fresh hypotheses and innovative research proposals to supercharge the pace of scientific and biomedical breakthroughs.
In today’s research landscape, scientists grapple with the overwhelming surge of publications and the challenge of blending insights from unfamiliar fields. Enter the “AI Co-Scientist”—a game-changer that taps into the scientific method to spark original ideas. It relies on a team of specialized agents like “Generation,” “Evaluation,” and “Evolution,” working in a self-improving loop to deliver top-notch, novel outputs.
Designed for collaboration, this system lets scientists engage effortlessly—whether by feeding it their own ideas or offering real-time feedback. It also harnesses tools like web searches and advanced AI models to ensure robust, high-quality hypotheses. Already, it’s proven its worth in areas like drug repurposing, pinpointing new treatment targets, and unraveling antimicrobial resistance mechanisms, with lab experiments backing up its predictions.
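The “Generation”, “Evaluation” and “Evolution” agents described above form a propose-score-refine loop. The toy sketch below mimics that control flow on a stand-in problem (nudging a number toward a target); it is not Google’s system, and every name in it is invented for illustration.

```python
import random

def generate(rng, n):
    """'Generation' agent: propose candidate hypotheses (here, just numbers)."""
    return [rng.uniform(-10, 10) for _ in range(n)]

def evaluate(x):
    """'Evaluation' agent: score a hypothesis (here, closeness to a target)."""
    return -abs(x - 3.0)

def evolve(rng, survivors):
    """'Evolution' agent: mutate the best candidates into new variants."""
    return [x + rng.gauss(0, 0.5) for x in survivors for _ in range(2)]

def co_scientist_loop(rounds=20, seed=0):
    """Self-improving loop: keep the top-scoring hypotheses each round
    and refine them, returning the best hypothesis found."""
    rng = random.Random(seed)
    pool = generate(rng, 10)
    for _ in range(rounds):
        pool.sort(key=evaluate, reverse=True)
        pool = pool[:5] + evolve(rng, pool[:5])
    return max(pool, key=evaluate)
```

In the real system, generation and evaluation are performed by LLM agents grounded in literature search, and “fitness” is a ranking of hypothesis quality rather than a numeric distance.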
Keywords and Hashtags: #ArtificialIntelligence #ScientificDiscovery #AICoScientist #Gemini2 #MedicalResearch #Innovation #Technology #DataScience #FutureOfScience

Google’s ‘Career Dreamer’ uses AI to help you explore job possibilities

Google is launching a new experiment that uses AI to help people explore more career possibilities. The company announced in a blog post on Wednesday that a new “Career Dreamer” tool can find patterns between your experiences, educational background, skills, and interests to connect you with careers that might be a good fit.

With Career Dreamer, you can use AI to draft a career identity statement by selecting your current and previous roles, skills, experiences, education, and interests. Google notes that you can add this career identity statement to your résumé or use it as a guide for talking points during an interview.

 

Career Dreamer lets you see a variety of careers that align with your background and interests via a visual web of possibilities. If you’re interested in a specific career, you can delve deeper into it to learn more about what it entails.

Image Credits: Google

The tool also lets you collaborate with Gemini, Google’s AI assistant, to workshop a cover letter or résumé and explore more job ideas.

It’s worth noting that, unlike popular services such as Indeed and LinkedIn, Career Dreamer doesn’t link you to actual job postings. Instead, it’s designed to help you quickly explore different careers so you don’t have to run a series of separate Google searches to find a fit.

Career Dreamer is currently only available as an experiment in the United States. It’s unknown when or if Google plans to bring the experiment to additional countries.

“We hope Career Dreamer can be helpful to all kinds of job seekers,” Google wrote in its blog post. “During its development, we consulted organizations that serve a wide range of individuals, such as students navigating their first careers, recent graduates entering the workforce, adult learners seeking new opportunities, and the military community, including transitioning service members, military spouses and veterans. If you’re ready for a career change, or just wondering what’s out there, try Career Dreamer.”

 

Image Credits: Google

In its blog post, Google points to a World Economic Forum report stating that people typically hold an average of 12 different jobs throughout their lives, and that Gen Z is expected to hold 18 jobs across six different careers.

Google notes that it can be hard to frame your previous experiences into a cohesive narrative, especially if your career path is less traditional, which is where Career Dreamer can help.

Plus, Google believes that the tool can help people better express how the skills they already have align with other jobs.

 

#Google #CareerDreamer #AI #CareerExploration #JobSeekers #Skills #Résumé #InterviewTips #GeminiAI #CareerChange #TechNews #Innovation #JobMarket #GenZ #TechCrunch2025

Think you can cheat with AI? Researcher creates watermarks to detect AI-generated writing

New Technology Uses Digital Watermarks to Detect AI-Generated Text
In an era of rapid advancements in artificial intelligence models like ChatGPT and Google’s Gemini, distinguishing between human-written and AI-generated text has become a daunting challenge for educators and employers. However, Yuheng Bu, an engineering professor at the University of Florida, is pioneering an innovative solution: digital watermarks designed to identify AI-generated content, potentially revolutionizing trust in academic and professional settings.
A study by Peter Scarfe from the University of Reading in the UK revealed that 94% of AI-generated texts went undetected in academic assessments, underscoring the urgent need for more effective detection tools. “With the evolution of large language models, it’s becoming nearly impossible to differentiate human and machine-written text without advanced intervention,” Bu explains. This is where his team’s work comes in, leveraging the University of Florida’s HiPerGator supercomputer to embed invisible signals into AI-generated text, making it easily verifiable.
What sets Bu’s technology apart is its focus on preserving text quality while ensuring the watermark’s resilience against edits like paraphrasing or synonym substitution. “We apply watermarks only to specific high-entropy sections of the text, maintaining its natural flow,” Bu notes. The method also employs a private key system, held by the generating entity (e.g., OpenAI for ChatGPT), enhancing security and making forgery or removal nearly impossible.
While companies like Google DeepMind have developed similar watermarking tools, Bu asserts that his approach excels in text quality and resistance to tampering. Still, a key challenge remains: distributing detection keys to end users like professors, which requires a broader ecosystem for widespread adoption.
Bu has published several papers on the topic, including “Adaptive Text Watermark for Large Language Models” at the ICML 2024 conference, emphasizing that this technology could become a vital tool for ensuring trust and authenticity in the age of generative AI. He envisions its integration into schools and digital platforms to verify content and combat misinformation.
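Bu’s adaptive, high-entropy scheme is not reproduced here, but keyed LLM watermarks of this family generally work by using a private key to bias token choices toward a pseudo-random “green” subset of the vocabulary, which only a key-holder can later test for. The sketch below is a generic, hypothetical illustration of that idea; all names are invented, and it omits the entropy-aware and quality-preserving machinery the article describes.

```python
import hashlib
import random

def green_list(prev_token, key, vocab, fraction=0.5):
    """Derive a pseudo-random 'green' subset of the vocabulary from the
    private key and the previous token; only the key-holder can recompute it."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def detect(tokens, key, vocab, fraction=0.5):
    """Fraction of tokens falling in their keyed green list. Unwatermarked
    text hovers near `fraction`; watermarked text scores well above it."""
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], key, vocab, fraction)
        for i in range(1, len(tokens))
    )
    return hits / max(1, len(tokens) - 1)
```

A generator holding the key steers sampling toward the green subset at high-entropy positions, and a verifier with the same key then tests whether the green fraction is statistically above chance, which is why detection keys must be distributed to end users such as professors for the scheme to see wide adoption.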
Keywords:
  • Artificial Intelligence
  • Digital Watermarks
  • AI-Generated Text Detection
  • Large Language Models
  • Education
  • Cybersecurity
  • ChatGPT
  • Google DeepMind
Hashtags:
#AI #DigitalWatermark #ArtificialIntelligence #EducationTech #TextDetection #CyberSecurity #ChatGPT #Innovation
 

Chatbots Elevate Doctors’ Clinical Decisions, Study Finds

By Hanae Armitage
A groundbreaking study has revealed that AI-powered chatbots can outperform doctors on complex clinical decisions, until physicians team up with the technology. When supported by AI, doctors match the chatbots’ performance, suggesting a future where human expertise and machine intelligence merge for superior healthcare outcomes.

The research, led by Jonathan Chen, MD, PhD, assistant professor of medicine at Stanford University, alongside a team of experts, dives into the nuanced world of post-diagnosis care. While chatbots excel at pinpointing diseases, a strength highlighted in a prior October 2024 study published in JAMA Network Open, their ability to guide treatment plans, like deciding when a patient should pause blood thinners before surgery or adjust protocols based on past drug reactions, has been less explored. These scenarios lack clear-cut answers, relying instead on a physician’s seasoned judgment.

To test this, Chen’s team pitted a large language model (LLM) chatbot against 46 doctors armed only with internet searches and medical references, and another 46 doctors paired with the same chatbot. The challenge: five real-world patient cases demanding clinical management reasoning, a skill akin to navigating a tricky route on a map app, weighing options like traffic delays or detours. For instance, how should a doctor handle a lung mass discovered by chance? Order a biopsy now, delay it, or dig deeper with imaging? The best choice hinges on patient preferences, follow-up reliability, and contextual clues, details that test both human and machine reasoning.

The results, published on February 5 in Nature Medicine, were striking. The chatbot alone outperformed the internet-reliant doctors, checking more boxes on a rigorous rubric crafted by board-certified experts. Yet when doctors partnered with the chatbot, their performance rose to match the AI’s standalone success. “For years, I’ve believed human-plus-computer beats either one alone,” Chen said. “This study pushes us to rethink what each excels at and how we blend those strengths.” Co-senior author Adam Rodman, MD, of Harvard University, and co-lead authors Ethan Goh, MD, and Robert Gallo, MD, echoed this vision of synergy.

What fuels this doctor-chatbot edge? Does the LLM prompt deeper reflection, or does it surface options physicians might miss? Chen sees this as a tantalizing question for future research. For now, the findings, backed by institutions including VA Palo Alto, Beth Israel Deaconess, Microsoft, and Kaiser, do not herald an era of “AI doctors.” Instead, they spotlight chatbots as trusted allies. “Patients shouldn’t bypass doctors for chatbots,” Chen cautioned. “There’s gold in AI, but also noise. The real skill is knowing what to trust, a must in today’s world.”

Funded by the Gordon and Betty Moore Foundation, the Stanford Clinical Excellence Research Center, and the VA Advanced Fellowship, the study signals a smarter, not standalone, role for AI in medicine.

Dublin Launches First-Ever Generative AI Lab for Local Government

Dr. Ashish Jha, TCD, Jamie Cudden, DCC, Yvonne Kelly, DCC, Khizer Biyabani, Adapt
 

Dublin is taking the initiative in digital innovation with the launch of Ireland’s first-ever Generative AI Lab focused on local government services. This exciting initiative, a collaboration between Dublin City Council (DCC), the ADAPT Research Ireland Centre, and Trinity Business School, aims to explore how AI technologies can improve the way local governments serve their communities—while making sure the technology is used responsibly and ethically.

So, what exactly is Generative AI? It’s a type of artificial intelligence that can create new content, like text, images, or even solutions to problems, by learning from large amounts of data. You’ve likely already encountered this technology in everyday life, from chatbots helping with customer service to AI tools solving complex issues in various industries.

The new Gen-AI Lab will serve as a hub for cutting-edge research, testing, and implementing these AI technologies in a way that benefits Dublin’s public services. The goal is to ensure AI is used in a way that is ethical, transparent, and adds real value for citizens.

How This Will Help Dublin

Jamie Cudden, the Smart City Program Manager at Dublin City Council, shared his enthusiasm about the project, noting that Generative AI could significantly improve how the council delivers services, communicates with the public, and prepares for the future. He highlighted the importance of working with Trinity College Dublin and ADAPT Research Centre’s world-class expertise to ensure that the AI solutions used are not only innovative but also responsible.

Professor John D. Kelleher, the Director of ADAPT, also spoke about the lab’s role in pushing the boundaries of AI research, while making sure that any new technologies are aligned with the public’s best interests. He highlighted that the collaboration between ADAPT and the City Council would allow them to create AI systems that truly benefit the people of Dublin.

 

Working Together for Real Impact

Over the next year, the Gen-AI Lab will hold a series of workshops, prototype developments, and public engagement events. The lab will work closely with Dublin City Council staff to identify areas where AI can improve operations, from speeding up administrative tasks to better responding to customer service requests. The goal is to make the council’s services more efficient and effective while giving staff the necessary training to embrace AI in their everyday work.

The initiative also aims to set an example for other cities, not only in Ireland but around the world, by promoting best practices for how to responsibly adopt and govern AI technology. Through this project, Dublin hopes to create a framework for ethical AI use that can be applied internationally.

Looking to the Future

This creative collaboration shows Dublin’s commitment to staying ahead of the curve in digital innovation. By combining academic research with real-world needs, the Gen-AI Lab will help shape the future of public services, making them more efficient, responsive, and accessible to everyone. It’s an example of how technology can be used for good when it is applied thoughtfully and responsibly.


Saudi Aramco Unveils 2024 Results Amid Ongoing Market Downturn

Mar 8, 2025
Saudi Aramco Announces Its 2024 Results
Saudi Aramco has announced that it achieved a net profit exceeding SAR 398.42 billion by the end of the 2024 fiscal year.
Revenues:
Revenues came to SAR 428,591 million in the fourth quarter of 2024, down 6.7% from SAR 459,287 million in the same quarter of the previous year and down 7.8% from SAR 464,625 million in the prior quarter.
Net Profits:
Net profit fell to SAR 86,756 million in the fourth quarter of 2024, down 15.7% from SAR 102,867 million in the same quarter last year and down 11.1% from SAR 97,621 million in the previous quarter.
Additionally, Saudi Aramco announced cash dividends for the fourth quarter of 2024:
  • A dividend of SAR 0.3312 per share for Q4 2024.
  • A total distributed amount of SAR 80.10 billion for Q4 2024 profits.
Summary of the Company’s Financial Performance for 2024:
Revenues for the full year reached SAR 1,801,674 million, a decline of 2.9% compared to the previous year. This is attributed to:
  • Lower crude oil prices and volumes sold.
Total Annual Results for 2024:
  • Revenues: SAR 1,801,674 million, compared to SAR 1,856,373 million the previous year, a decrease of 2.9%.
  • Net Profits: SAR 393,891 million, compared to SAR 452,753 million last year, down 13.0%.
  • Earnings Per Share: SAR 1.63, down 13.0% from SAR 1.87 the previous year.
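The quarter-on-quarter and year-on-year percentages in the bulletin can be sanity-checked directly from the reported figures; the snippet below recomputes them (SAR millions, rounded to one decimal place).

```python
def pct_change(new, old):
    """Percentage change from `old` to `new`, rounded to one decimal place."""
    return round((new - old) / old * 100, 1)

# Q4 2024 figures from the bulletin (SAR millions)
q4_rev_yoy = pct_change(428_591, 459_287)      # vs Q4 2023 -> -6.7
q4_rev_qoq = pct_change(428_591, 464_625)      # vs Q3 2024 -> -7.8
q4_profit_yoy = pct_change(86_756, 102_867)    # vs Q4 2023 -> -15.7

# Full-year 2024 figures (SAR millions)
fy_rev = pct_change(1_801_674, 1_856_373)      # -> -2.9
fy_profit = pct_change(393_891, 452_753)       # -> -13.0
```

Each result matches the percentage stated in the corresponding line of the announcement.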
Source: Tadawul – Saudi Bulletin