But at what cost? These systems collect pieces of your personal data and may build a profile of your behaviour. In this blog, we explain how these overviews collect and use your data, the privacy risks involved, what you can do about them, and whether existing laws protect you from AI profiling.
How Do AI Overviews Collect and Use Your Personal Data?
When you use a service that generates a personalised overview, it often gathers data from many sources: your browsing history, location data, purchase records, and social media likes or messages. Some platforms track your habits across apps and websites, including products you browse but never buy.
This data feeds into large datasets. The service then matches your profile against content to produce suggestions or summaries tailored to you. These overviews grow more accurate over time, adjusting to your response patterns. Some systems keep records of your chats or interactions; others infer your interests from subtle cues, such as location or timestamps.
Why Do Personalised AI Overviews Carry Privacy Risks?
Personalisation brings real value: convenience, quicker access to relevant information and useful suggestions. But it also raises serious concerns. First, these overviews often rely on collecting more personal data than is necessary. Providers may cast a wide net under the guise of convenience, relying on broad privacy notices that allow them to collect data “just in case”.
Second, there is inference risk. The system may predict something sensitive about you, such as your health, finances or beliefs. These predictions shape your experience and can influence decisions made about you without your knowledge.
Third, algorithmic bias may affect what recommendations you receive. Biased data or skewed models can reinforce unfair patterns, favouring one group or viewpoint over another.
Fourth, personal data may leak or be repurposed. Data originally collected for an overview might later be used for training or marketing without clear consent. This is common in major tech services, which build large profiles on users.
Top Privacy Risks of Personalised AI Overviews
Let us break down the top risks clearly:
- Excessive Data Collection: Services gather more information than needed and keep it longer.
- Security & Fraud Risk: Large-scale data collection widens the attack surface. If bad actors gain access to the data or the AI models, they can craft tailored phishing attacks or impersonation content. Fraudsters could clone your voice or use personal details to exploit you.
- Profiling and Inference: The system may form assumptions about your character, habits or health. Some inferences might be wrong or misleading.
- Bias and Discrimination: AI sometimes draws on biased data. If a system has bias, it may produce unfair content or reinforce stereotypes.
- Lack of Meaningful Consent: Even when consent is asked, it may be vague, bundled or hard to withdraw.
- Limited Control Over Deletion or Opt‑Out: Some services make it difficult to erase your data or stop further profiling, even where an opt‑out is nominally offered.
How Can Users Protect Their Privacy?
You can take practical steps to reduce privacy risks associated with personalised AI overviews:
- Review privacy settings on any app that generates personalised overviews. Turn off unnecessary tracking, set preferences, or opt out of personalised recommendations where possible.
- Read privacy policies carefully. If it helps, use AI tools such as chatbots to summarise difficult legal language. Ask questions like: “What data will you collect? Will you use it later for another purpose?”
- Limit the sharing of private data in chats or profiles. Do not share sensitive data, and do not grant voice or location permissions, unless absolutely necessary.
- Delete history and opt out if the service allows. Some platforms make deletion tricky. Follow their process where you can.
- Choose services with a good reputation for data protection. Look for platforms that practise “data minimisation” and support transparency.
- Advocate for stronger laws. Support policies that require informed consent, limit profiling, and enforce deletion rights.
Do Current Privacy Laws Protect Against AI Profiling?
In the UK and the EU, the GDPR and the UK GDPR apply, and both cover profiling and automated decision‑making. Under Article 22, you have the right not to be subject to a decision based solely on automated processing, including profiling, where it produces legal or similarly significant effects on you.
You also have the right to know what data is processed, to access it, to correct it, to delete it, and to port it. Profiling that significantly affects you must come with safeguards, such as the right to human intervention and meaningful information about the logic involved.
However, these laws are not tailored to modern AI use. They apply to AI as they do to any automated system, through general rules, and enforcement and clarity can lag as new AI applications emerge. Regulators in the EU and UK released guidance at the end of 2024 on AI and data protection to help close these gaps.
In June 2025, the UK passed the Data (Use and Access) Act 2025, which includes some provisions on how data may be used to train generative models. Still, many details remain unclear, particularly around user consent when personal data feeds AI systems. Critics note that current laws offer only baseline protection: they lack strong transparency requirements for AI processing and profiling, and they provide no enforceable right to explainable AI.
Conclusion
Personalised overviews may offer helpful information, but they come at the cost of your privacy. With data collection often going far beyond what’s necessary, the risks of profiling, bias, and misuse are real. While laws like GDPR offer some protection, they’re still catching up with how modern tools operate. The best defence is awareness and action. You can reduce the risk by reviewing your settings, choosing trusted services, and speaking up for better regulation. Need help safeguarding your online presence or navigating privacy tools? Contact us at Rankingeek Marketing Agency. We’re here to support you and help secure your online presence!