Browser Privacy
Use a privacy-focused browser like Brave, not just Chrome’s incognito mode.
Mark thought he was being private by using Chrome’s Incognito mode to shop for a gift. The next day, he was bombarded with ads for that exact gift on every website he visited. His friend, Sarah, used a privacy-focused browser like Brave. It automatically blocked trackers and ads. She shopped for a gift, and her browsing remained her own business. Mark learned that “Incognito” doesn’t stop websites from tracking you; it only clears your local history. True browser privacy requires actively blocking the trackers themselves.
Stop doing “Accept All” on cookie banners. Do take 10 seconds to customize your preferences instead.
Anna was in a hurry and clicked “Accept All” on every cookie banner she saw. Her online experience became a maze of eerily specific ads, following her from site to site. Her brother, Ben, started taking an extra 10 seconds on each site. He would click “Customize” and disable all “Marketing” and “Advertising” cookies. Ben’s browsing experience was cleaner, and he was no longer being relentlessly stalked by advertisers. He learned that those 10 seconds were a small price to pay for reclaiming his digital privacy.
The #1 secret for blocking browser fingerprinting that ad networks don’t want you to know.
An ad network was tracking a user even though she used a VPN and cleared her cookies. They were using “browser fingerprinting”—combining unique details like her screen resolution, installed fonts, and browser version to create a unique ID. She thought she was anonymous, but she wasn’t. The secret to fighting this is to use a browser that actively randomizes these details. By using a privacy browser with fingerprinting protection, her browser looked just like millions of others, making her effectively invisible in the crowd.
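The mechanism is easy to sketch. A tracking script collects a handful of browser attributes and hashes them into a stable ID; the attribute names and values below are purely illustrative:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into one stable tracking ID.
    Real trackers use dozens of signals (canvas, WebGL, audio);
    these three are just illustrative."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two visits with identical attributes yield the same ID,
# even with cookies cleared and the IP hidden behind a VPN.
visit1 = fingerprint({"screen": "2560x1440", "fonts": "Arial,Helvetica,Verdana",
                      "ua": "Mozilla/5.0 ... Chrome/120.0"})
visit2 = fingerprint({"screen": "2560x1440", "fonts": "Arial,Helvetica,Verdana",
                      "ua": "Mozilla/5.0 ... Chrome/120.0"})
assert visit1 == visit2  # same user, recognized without cookies

# A browser that randomizes even one attribute breaks the link:
visit3 = fingerprint({"screen": "2560x1440", "fonts": "Arial,Georgia,Courier",
                      "ua": "Mozilla/5.0 ... Chrome/120.0"})
assert visit3 != visit1
```

This is why randomizing details works: the tracker’s ID is only as stable as the attributes feeding it.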
The biggest lie you’ve been told about “private browsing” mode.
The lie is that it makes you anonymous. A student used his library’s computer in private browsing mode to check his social media. He thought his activity was hidden. He didn’t realize that the library’s network administrator, his internet service provider, and the websites he logged into could still see everything he was doing. Private browsing only does one thing: it prevents the browser from saving your history and cookies on that specific computer. It doesn’t hide your activity from the network itself.
I wish I knew this about the privacy risks of browser extensions when I first started using them.
I installed a “free” browser extension that promised to find the best coupons when I shopped online. It worked great. What I didn’t know was that the extension was also scanning every single website I visited and selling my complete browsing history to a data broker. I wish I had known that many browser extensions, especially free ones, are not free at all. They are paid for with your personal data. I now only install extensions from reputable developers and carefully review the permissions they request.
I’m just going to say it: Your browser history is a more accurate profile of you than your social media.
A person’s social media profile was a carefully curated collection of happy vacation photos and professional accomplishments. Her browser history, however, told a different story. It revealed her late-night health worries, her secret hobbies, her political leanings, and her financial anxieties. It was a raw, unfiltered record of her true thoughts and concerns. The companies that track this data don’t just know the person you pretend to be; they know the person you actually are.
99% of internet users make this one mistake with their browser’s privacy settings.
The most common mistake is never changing the default settings. A user bought a new computer and used the browser as it came out of the box. She didn’t realize the default settings were designed for convenience, not privacy. They allowed third-party cookies, extensive tracking, and sent a huge amount of data back to the manufacturer. By spending just five minutes in the settings menu to enable stricter tracking protection and disable data sharing, she could have significantly improved her privacy posture.
This one small action of changing your default search engine to a private one will change the way you’re tracked online forever.
A person used Google for all her searches. She noticed that if she searched for a product, ads for that product would follow her around the internet for weeks. She took one small action: she changed her browser’s default search engine to a private alternative like DuckDuckGo. Suddenly, the creepy, targeted ads stopped. Private search engines don’t track your search history or build a profile of you. This single change severed the link between her curiosity and the advertising industry.
The reason your ad blocker isn’t fully working is because you haven’t enabled advanced filter lists.
A user had an ad blocker installed but was still seeing sponsored posts and some non-intrusive ads. He thought his ad blocker was broken. He went into the settings and found a section for “filter lists.” He enabled several advanced lists, such as “annoyance” filters that block social media widgets and “privacy” filters that block known tracking domains. Suddenly, his web browsing experience became significantly cleaner and faster, because he had given his ad blocker more powerful rules to work with.
If you’re still syncing your browser history across all devices without encryption, you’re losing your digital privacy.
A person loved the convenience of syncing his browser history between his work computer, his home laptop, and his phone. He didn’t realize that this history was being stored in the cloud in a way that the tech company could read. When his account was compromised, the hacker gained access to his entire, unified browsing history from all devices. By using a browser that offers end-to-end encrypted syncing, he could have the convenience of syncing without sacrificing his privacy to the cloud provider.
Email Privacy
Use an encrypted email service like ProtonMail, not just Gmail for sensitive communication.
A journalist was communicating with a sensitive source. If she had used her standard Gmail account, the email provider could have been compelled to hand over the contents of their conversation to a government agency. Instead, she insisted they both use an end-to-end encrypted email service. Because the emails were encrypted on her device and could only be decrypted by the source, not even the email provider could read their contents. This protected her source and the integrity of her story.
Stop using your personal email for online sign-ups. Do use an email alias service instead.
A person used her main personal email address to sign up for a new online store. A year later, that store had a data breach, and her email address was leaked. She was immediately flooded with spam and phishing attempts. Her friend, using a service like SimpleLogin or Firefox Relay, created a unique, random email alias for every online service. When one of those aliases started receiving spam, she knew exactly which service had been breached and could simply delete the alias, protecting her real inbox.
The #1 hack for preventing email trackers from knowing when and where you open emails.
The secret is to disable the automatic loading of images in your email client. Many marketing emails contain a tiny, invisible 1×1 pixel image. When you open the email, your client loads this image from a server, which tells the sender that you opened their email, at what time, and often from what IP address. By setting your email client to not load images by default, you break this tracking mechanism, and the sender has no idea if you’ve read their message.
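To see why blocking images works, it helps to see what a tracking pixel actually is. This sketch simulates the server side; the URL and field names are hypothetical:

```python
from datetime import datetime, timezone

# A marketing email embeds something like this (URL is hypothetical):
#   <img src="https://track.example.com/open.gif?msg=1234&user=abcd"
#        width="1" height="1">
# When your mail client fetches that image, the sender's server can log:

def log_open(query: dict, client_ip: str) -> str:
    """What the tracker learns from a single 1x1 image request."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (f"message {query['msg']} opened by user {query['user']} "
            f"at {ts} from IP {client_ip}")

entry = log_open({"msg": "1234", "user": "abcd"}, "203.0.113.7")
print(entry)

# If remote images are blocked, this request is never made,
# so no log entry is ever written: the sender learns nothing.
```

The per-recipient `user` parameter is the key: the pixel URL is unique to you, so one fetch ties the open event to your address.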
The biggest lie you’ve been told about the privacy of your work email.
The lie is that you have any privacy at all. An employee was using his work email to have a “private” conversation with a colleague, complaining about his boss. He was shocked when he was called into an HR meeting where they showed him a printout of that exact conversation. He didn’t realize that his work email account and everything in it is the property of his employer. The company has the legal right and the technical ability to read any email sent on their system.
I wish I knew this about how email metadata reveals my location when I was starting my career.
When I was traveling and sending emails from different hotels and cafes, I didn’t realize that the header of every email I sent contained the IP address of the network I was using. This metadata created a detailed record of my physical location over time, accessible to anyone who knew how to read the email’s source. I wish I had known to use a VPN, which would have masked my real IP address and protected this sensitive location information from being leaked in every single email.
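You can inspect this metadata yourself with Python’s standard `email` module. The message below is a simplified fabrication, but the structure mirrors real headers: each mail server adds a `Received:` line, and the earliest hop (the last one listed) often contains the sender’s network IP:

```python
import re
from email import message_from_string

# Simplified raw message; real headers carry one "Received:" line per hop.
raw = """\
Received: from mail.example.com (mail.example.com [198.51.100.5])
\tby mx.example.net; Mon, 1 Jan 2024 10:00:00 +0000
Received: from [192.0.2.44] (hotel-wifi.example [192.0.2.44])
\tby mail.example.com; Mon, 1 Jan 2024 09:59:58 +0000
From: traveler@example.com
Subject: hello

body
"""

msg = message_from_string(raw)
hops = msg.get_all("Received")
ips = [ip for hop in hops
          for ip in re.findall(r"\[(\d+\.\d+\.\d+\.\d+)\]", hop)]

# Headers are prepended, so the last hop listed is the first one
# chronologically: the network the sender was actually on.
print(ips[-1])  # the hotel Wi-Fi's address, 192.0.2.44
```

A VPN doesn’t remove the header; it just ensures the IP recorded there belongs to the VPN server, not to your hotel.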
I’m just going to say it: The concept of a private email conversation on a free service is a myth.
A user was emailing a friend about her plans to go on a beach vacation. A few minutes later, she saw ads for hotels and flights to that exact destination on a social media site. She was confused. The reason? Her free email provider scans the content of every email to build a detailed advertising profile. If the service is free, you are not the customer; you are the product. The “price” you pay for the free service is your privacy.
99% of people make this one mistake when unsubscribing from email lists.
The most common mistake is clicking the “unsubscribe” link in an obvious spam or phishing email. A person received a sketchy email and, wanting to stop receiving them, clicked “unsubscribe.” This didn’t remove her from the list. Instead, it confirmed to the spammer that her email address was active and being read. This made her email address more valuable, and it was immediately sold to other spammers, resulting in an even bigger flood of junk mail. If an email looks like spam, just mark it as spam and delete it.
This one small habit of using PGP encryption for important emails will change the way you protect your communications forever.
Two business partners needed to exchange a highly sensitive document containing trade secrets. Sending it as a standard email attachment would be like sending a postcard; anyone who intercepted it could read it. They took the time to set up PGP (Pretty Good Privacy). By encrypting the email with the other’s public key, they ensured that only the intended recipient, with their unique private key, could ever decrypt and read the document. This habit provided them with genuine end-to-end security for their most important communications.
The reason you’re getting so much spam is because your email address has been sold by data brokers.
A person was wondering why she was suddenly getting so much spam. The reason was that a service she had signed up for years ago had been part of a major data breach. Her name, email, and password were stolen and then sold in bulk to data brokers and spammers on the dark web. This is the lifecycle of most spam. It starts with a data breach at a single, legitimate company and ends with your email address being on thousands of spammers’ lists.
If you’re still using an email address that includes your birth year, you’re losing a key piece of your identity to data miners.
A person created an email address like “john.smith1985@email.com.” He didn’t realize that he was giving away a critical piece of his personal information to every website he signed up for and every person he emailed. Scammers and identity thieves use your date of birth as a key data point to answer security questions and impersonate you. A simple, professional email address that doesn’t reveal your age is a small but important step in protecting your identity.
VPNs
Use a reputable, audited, no-logs VPN, not a free VPN service.
A student, trying to save money, used a free VPN service to browse the internet. He didn’t realize the VPN company was logging all of his browsing activity and selling it to data brokers. His friend, however, paid a few dollars a month for a reputable VPN service that had a strict, independently audited “no-logs” policy. She knew that her online activity was not being recorded or sold. The “free” VPN cost the student his privacy, while the paid VPN protected it.
Stop thinking a VPN makes you completely anonymous. Do understand its limitations and use it as part of a larger privacy strategy instead.
A person used a VPN and thought he was completely anonymous. He then proceeded to log into his personal Google and Facebook accounts. The VPN hid his IP address, but it didn’t hide the fact that he was logged into his real-name accounts. A VPN is a powerful tool for privacy, but it is not a magic invisibility cloak. True anonymity requires a layered approach, including using privacy-focused browsers, compartmentalizing your online identities, and being mindful of the services you log into.
The #1 secret for choosing a trustworthy VPN that marketing sites won’t tell you.
The secret is not to look at the speed tests or the number of servers, but to look for independent, third-party security audits. Many VPNs make bold claims about their “no-logs” policy, but these are just marketing words. A trustworthy VPN will hire an outside cybersecurity firm to conduct a rigorous audit of its systems and then publish the results for everyone to see. This independent verification is the only real proof that a VPN provider is living up to its privacy promises.
The biggest lie you’ve been told about “military-grade encryption” in VPN ads.
The lie is that “military-grade encryption” is a special, exclusive feature. Almost every VPN service advertises this. In reality, it’s just a marketing term for AES-256, which is the public, industry-standard encryption algorithm used by banks, governments, and countless other services. It is very secure, but it is not a unique feature that one VPN has and others don’t. It’s the standard, baseline level of security you should expect from any reputable service.
I wish I knew this about the difference between a VPN and a proxy server when I first started trying to protect my privacy.
When I first started, I used a free web proxy to try to hide my IP address. I thought it was making me secure. I wish I had known that a proxy only routes the traffic from your web browser, while a VPN secures the traffic from your entire computer. More importantly, a standard proxy does not encrypt your traffic. My internet service provider could still see every website I was visiting. A VPN creates an encrypted tunnel for all of your internet traffic, providing a much higher level of security and privacy.
I’m just going to say it: Most free VPNs are selling your data.
A free VPN provider has to pay for its servers and infrastructure somehow. If you are not paying for the product, you are the product. Many free VPN services have been exposed for monitoring their users’ browsing habits, collecting their personal data, and selling it to third-party advertisers and data brokers. They are not privacy tools; they are data collection tools disguised as privacy tools. A reputable, paid VPN has a clear business model where their incentive is to protect your privacy, not to exploit it.
99% of new VPN users make this one mistake when setting it up.
The most common mistake is failing to enable the “kill switch” feature. A person was using a VPN to download a file. His Wi-Fi connection flickered for a moment, the VPN disconnected, and his computer immediately reconnected with his real, exposed IP address. If he had enabled the kill switch, the VPN software would have instantly blocked all internet traffic the moment the VPN connection dropped, ensuring that his real IP address was never accidentally leaked.
This one small action of enabling your VPN’s kill switch will change the way you protect against data leaks forever.
A journalist was using a VPN to communicate securely from a country with heavy surveillance. Her internet connection was unstable. She had enabled her VPN’s kill switch. When her connection briefly dropped, the kill switch automatically cut off her internet access completely. This prevented her computer from sending any unencrypted traffic over the local network, which could have revealed her identity and her activity. This one small setting was a critical fail-safe that protected her from a potentially dangerous data leak.
The reason your VPN is so slow is because you’re connecting to a server on the other side of the world.
A person in the United States was complaining that his VPN was making his internet incredibly slow. He looked at his VPN client and realized it had automatically connected him to a server in Australia. His internet traffic was traveling halfway around the world and back again. The speed of light is a hard limit. By simply choosing a server that was physically closer to him, like one in a nearby city, his VPN speed increased dramatically and was nearly as fast as his normal connection.
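The speed-of-light point is easy to check with back-of-envelope arithmetic. The figures below are rough assumptions (light travels through fiber at roughly 200,000 km/s, about two-thirds of its speed in a vacuum):

```python
# Best-case round-trip time added by server distance alone,
# ignoring routing detours and queuing delays.
FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Minimum round trip: there and back at fiber speed."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

nearby = min_rtt_ms(500)      # a server a few hundred km away
faraway = min_rtt_ms(15_000)  # roughly the US to Australia

print(f"nearby: {nearby:.0f} ms, faraway: {faraway:.0f} ms")
# Every packet pays the round-trip cost, so a 30x latency difference
# makes the whole connection feel dramatically slower.
```

Real-world latency is higher than these floors because traffic rarely takes the geometrically shortest path, but the ratio between a nearby and a distant server holds.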
If you’re still using public Wi-Fi without a VPN, you’re losing your personal information to anyone on the network.
A person sat in a coffee shop and connected to the public Wi-Fi to check her bank balance. She didn’t use a VPN. An attacker on the same network used a simple “packet sniffing” tool to intercept all the unencrypted traffic. He was able to see the websites she was visiting and capture the data she was sending. If she had used a VPN, all of her traffic would have been encrypted, and the attacker would have only seen a stream of indecipherable gibberish.
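To make “unencrypted traffic” concrete, here is a sketch of the raw bytes an HTTP (not HTTPS) login actually puts on the wire; the host and credentials are fabricated:

```python
# What an unencrypted HTTP login looks like on the wire.
# Anyone sniffing the same Wi-Fi network sees these exact bytes.
request = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: bank.example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: 31\r\n"
    b"\r\n"
    b"username=alice&password=hunter2"
)

# A packet sniffer needs no decryption step at all:
assert b"password=hunter2" in request

# Over HTTPS, or inside a VPN tunnel, the same application data is
# encrypted before transmission, so a sniffer captures only ciphertext.
```

Note that modern banking sites use HTTPS, which already encrypts this exchange; the VPN’s added value on public Wi-Fi is covering everything else, including plain-HTTP sites and DNS lookups.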
Social Media Privacy
Use privacy-focused front-ends for social media, not the official apps.
A user was tired of the ads, the algorithmic timeline, and the constant tracking of the official Twitter app. She discovered Nitter, an open-source, privacy-focused front-end. It allowed her to read Twitter feeds in a clean, chronological interface without any ads or tracking. By using privacy-focused front-ends, she could still access the public content on social media platforms without having to subject herself to their invasive data collection and engagement-driven algorithms.
Stop doing public “check-ins” at locations. Do share your experiences after you’ve left instead.
A family went on vacation and would post “check-ins” and photos from their hotel every day. They were broadcasting to the world that their home was empty. A smarter family enjoyed their vacation and then posted their beautiful photos once they had safely returned home. By sharing their experiences after the fact, they were able to share their joy with friends and family without compromising their physical security.
The #1 tip for locking down your Facebook privacy settings that the platform makes hard to find.
The single most important setting is the “Off-Facebook Activity” tool. A user was shocked to find that Facebook had a list of hundreds of other apps and websites she had visited that were sharing her activity with the platform. This is how Facebook knows you looked at a pair of shoes on a retail website. By using this tool to clear her history and disconnect future off-platform activity, she was able to significantly limit one of the most invasive ways the platform tracks you across the entire internet.
The biggest lie you’ve been told about the “friends only” privacy setting.
The lie is that setting a post to “friends only” means only your friends will ever see it. A person posted a private, embarrassing story, thinking it was safe. One of her friends took a screenshot and shared it publicly. Her “private” post was now viral. The “friends only” setting only controls what the platform shows; it doesn’t control what other humans do. You should never post anything online that you wouldn’t be comfortable with being on a public billboard, because a single screenshot can make it so.
I wish I knew this about how social media platforms use my “likes” to build a psychological profile of me.
I used to “like” pages and posts on social media without a second thought. I “liked” certain movies, political pages, and brands. I wish I had known that researchers have shown that by analyzing these seemingly innocent “likes,” a platform’s algorithm can infer incredibly sensitive personal attributes, like your political affiliation, your sexual orientation, and even your personality traits, with a high degree of accuracy. This psychological profile is then used to manipulate you with hyper-targeted advertising and content.
I’m just going to say it: Deleting your social media accounts is the only way to truly reclaim your privacy from them.
A person was tired of the constant tracking, the mental health impact, and the privacy scandals. She went through all the privacy settings, but she still felt like she was being exploited. She finally made the decision to permanently delete her accounts. A month later, she felt a profound sense of relief. She was no longer being constantly monitored, her attention was no longer being sold to advertisers, and she had more time for real-world connections. While not for everyone, it is the only guaranteed way to opt out.
99% of social media users make this one mistake with third-party app permissions.
The most common mistake is taking a fun-looking online quiz, like “Which Disney Princess Are You?”, and blindly granting the quiz app access to their social media profile. A user did this and didn’t realize she had just given a shady, unknown company access to her friend list, her personal information, and her posts. These quizzes are often just a Trojan horse for data harvesting. Always be extremely cautious about connecting any third-party application to your social media accounts.
This one small habit of regularly reviewing your privacy settings will change the way you control your data on social media forever.
A social media platform released a new update. Buried in the update, a new, invasive data-sharing feature was enabled by default. A user who never checked his settings was now sharing more data than he realized. His friend, however, had a habit of doing a “privacy check-up” on her accounts once a month. She would go through the privacy settings and ensure they were still configured the way she wanted. This small habit protected her from the “privacy creep” that happens when platforms quietly change their defaults.
The reason you’re seeing eerily specific ads is because you’re not limiting off-platform activity tracking.
A person was talking to a friend about needing a new tent. He never searched for it. The next day, his social media feed was full of ads for tents. He thought his phone was listening to him. The real reason was likely more mundane. His friend had searched for tents, and the platform’s social graph knew they were connected and likely to have similar interests. Or, he had previously visited a camping website that shared his activity. Limiting this off-platform tracking is a key step to stopping these “creepy” ads.
If you’re still using your real name and photo on every social media platform, you’re losing your anonymity.
A person used her full real name and a clear headshot on all of her public social media profiles. This made it incredibly easy for a stranger to find out where she worked, who her family members were, and other personal details. This made her a target for harassment and doxxing. Using a pseudonym or a handle, and a more anonymous profile picture, on public platforms allows you to participate in online communities while maintaining a crucial layer of separation between your online persona and your real-world identity.
App Permissions
Use granular app permission controls, not just accepting all defaults.
A person installed a new game. During setup, she chose the “Allow While Using the App” option for location services, not the “Always Allow” option. She also denied its request to access her contacts. By taking a moment to choose the most restrictive permission that still allowed the app to function, she was able to enjoy the game without giving the developer constant, unnecessary access to her sensitive personal data. She was in control of her data, not the app.
Stop giving apps access to your contacts and microphone. Do question why an app needs a certain permission before granting it.
A user downloaded a photo editing app. It asked for permission to access her microphone and her entire contact list. She stopped and asked herself, “Why would a photo editor need to listen to my conversations or know who my friends are?” She realized there was no legitimate reason. She denied the permissions, and the app still worked perfectly. She learned to be skeptical and to question the motives of any app that asks for permissions that are not essential for its core functionality.
The #1 secret your apps don’t want you to know about the data they collect in the background.
The secret is that many apps continue to collect data even when you are not actively using them. A weather app, for example, might have permission to access your location “always.” This means it is likely recording your precise location every few minutes, 24/7, and sending that data back to its servers to be sold to data brokers. This background data collection is a huge source of revenue for many “free” apps, and it’s happening without your direct knowledge.
The biggest lie you’ve been told about apps needing your location to function.
The lie is that an app needs your precise, “always on” location to work. A user wanted to get local news updates from an app. The app requested “precise location” access. Instead, she chose to only grant it access to her “approximate location.” The app was still able to determine her city and provide her with the relevant news, but it didn’t have a record of her exact home address or her every move. For many applications, a general location is all that’s needed.
I wish I knew this about how flashlight apps were harvesting my data when I first got a smartphone.
In the early days of smartphones, I downloaded a simple flashlight app. It was free. I didn’t think twice when it asked for permission to access my contacts and my location. I just wanted a flashlight. I wish I had known that many of these early, simple utility apps were a scam. Their real purpose was not to be a flashlight, but to be a data harvesting tool. They would collect users’ personal information and sell it, all under the guise of providing a simple, free service.
I’m just going to say it: The “free” app you just downloaded is being paid for with your personal data.
A person downloaded a “free” game. He was then shown a constant stream of highly personalized ads. He also noticed his phone’s battery was draining faster. The game was not free. He was paying for it with his attention (by watching ads), with his battery life (from the background tracking), and, most importantly, with his personal data, which was being collected and sold to advertisers to create the personalized ads he was seeing. If you’re not the customer, you’re the product.
99% of smartphone users make this one mistake when installing a new app.
The most common mistake is blindly clicking “Agree” and “Allow” to every single pop-up during the installation process without reading what they are for. A user is excited to use their new app, so they rush through the setup. In the process, they grant the app access to their camera, microphone, location, and contacts without a second thought. This is how a simple game ends up having the ability to record your conversations and track your every move.
This one small action of conducting a monthly “permission audit” on your phone will change your mobile privacy forever.
A user was concerned about his mobile privacy. He set a calendar reminder for the first of every month to conduct a “permission audit.” He would go into his phone’s settings and review every app that had access to sensitive permissions like his location, microphone, and camera. He was often surprised to find apps he hadn’t used in months that still had access. This simple, 10-minute monthly habit ensured that only the apps he actively used and trusted had access to his data.
The reason your battery drains so fast is because you’re allowing too many apps to run in the background.
A person’s phone would be dead by 3 PM every day. She thought her battery was old. She looked at her phone’s settings and discovered that dozens of apps had permission for “Background App Refresh.” This meant they were constantly waking up to check for new content, even when she wasn’t using them. By turning this permission off for all but her most essential messaging apps, she found that her battery life nearly doubled.
If you’re still granting “always on” location access to apps, you’re losing a detailed record of your every move.
A user had granted “always on” location access to several apps. He used a feature on his phone to see his location history and was horrified. He saw a map with a detailed, minute-by-minute record of his movements for the past year—his home, his workplace, his doctor’s visits, his friends’ houses. He realized he had been giving these companies a complete surveillance log of his entire life. He immediately changed the setting for all non-essential apps to “only while using the app.”
Data Brokers
Use a data removal service, not manual opt-outs from hundreds of data brokers.
A person was horrified to find his home address and phone number on a “people search” website. He went through the site’s complex opt-out process. A week later, he found his information on a dozen other, similar sites. He realized that manually opting out of the hundreds of data brokers that exist is a nearly impossible, full-time job. He decided to use a data removal service, which automatically scans for his information and sends opt-out requests on his behalf, saving him hundreds of hours of frustrating work.
Stop giving out your personal information for discounts. Do use a “burner” identity for marketing sign-ups instead.
A person would sign up for store loyalty cards and online discounts using her real name, email, and phone number. Her information was then sold to data brokers, and she was inundated with spam and robocalls. Her privacy-conscious friend used a “burner” identity. She had a separate email address and a voice-over-IP phone number that she used exclusively for these kinds of marketing sign-ups. This kept her real identity out of the hands of data brokers and her real inbox and phone spam-free.
The #1 hack for finding out which data brokers are selling your information.
The secret is to use a unique email alias for every service you sign up for. A person used a service that let him create an infinite number of email aliases that all forwarded to his real inbox. He would use a unique alias like “nameofstore@mydomain.com” for each sign-up. A few months later, he started getting spam sent to the “nameofstore” alias. He now had definitive proof that that specific store had either sold or leaked his information to a data broker.
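The bookkeeping behind this trick is trivial. This sketch assumes a hypothetical catch-all domain (`mydomain.example`) and shows how each alias becomes a self-identifying tripwire:

```python
import secrets

def make_alias(service: str, domain: str = "mydomain.example") -> str:
    """One forwarding alias per sign-up: the service name is baked in,
    plus a random suffix so aliases can't be guessed in bulk."""
    return f"{service}.{secrets.token_hex(3)}@{domain}"

aliases = {}  # alias -> which service was given it
for service in ("shoestore", "newsapp", "travelsite"):
    aliases[make_alias(service)] = service

def who_leaked(spam_recipient: str) -> str:
    """When spam arrives at an alias, the culprit names itself."""
    return aliases.get(spam_recipient, "unknown sender list")

# Spam arriving at the 'shoestore' alias proves that store
# sold or leaked your address; delete the alias and move on.
```

Services like SimpleLogin or Firefox Relay automate this pattern; a catch-all address on a domain you own achieves the same effect manually.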
The biggest lie you’ve been told about data anonymization.
The lie is that when a company “anonymizes” your data before selling it, your privacy is protected. A data broker sold a large, “anonymized” dataset of people’s web browsing habits. The dataset didn’t have names, but it did have zip codes, birth dates, and genders. Researchers were able to take this supposedly anonymous data, cross-reference it with public records, and re-identify specific individuals with a high degree of accuracy. True anonymization is incredibly difficult, and most companies don’t do it properly.
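The arithmetic behind re-identification is simple enough to check directly. The figures below are rough, order-of-magnitude assumptions, but they echo well-known research (Latanya Sweeney famously estimated that the large majority of the US population is uniquely identified by ZIP code, birth date, and gender alone):

```python
# Why zip code + birth date + gender is nearly a fingerprint.
# All figures are rough assumptions for a back-of-envelope estimate.
zip_codes = 42_000        # ~42k US ZIP codes in use
birth_dates = 365 * 80    # plausible birth dates across ~80 years
genders = 2

combinations = zip_codes * birth_dates * genders
population = 330_000_000

print(f"{combinations:,} possible buckets for {population:,} people")
print(f"average people per bucket: {population / combinations:.2f}")
# Far more buckets than people means a typical bucket holds at most
# one person: three "anonymous" fields pinpoint an individual.
```

This is why stripping names is not anonymization: any quasi-identifiers left behind can be cross-referenced with public records, exactly as the researchers did.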
I wish I knew this about the existence of data brokers when I was younger and signing up for everything online.
When I was a teenager, I would sign up for every new social media site, online game, and forum using my real information. I had no idea that a shadowy, multi-billion dollar industry of “data brokers” existed. I didn’t know that every piece of information I shared was being collected, aggregated, and sold to create a detailed profile of me that would be used by advertisers, insurance companies, and even potential employers for the rest of my life.
I’m just going to say it: The data broker industry is the shadowy underbelly of the internet.
You have never directly interacted with a data broker, yet they know your name, your address, your income, your political affiliation, and your health concerns. They operate in the shadows, scraping information from public records, buying it from apps and services, and then selling these detailed profiles of you to the highest bidder without your knowledge or consent. It is a largely unregulated industry that profits from the surveillance of our daily lives.
99% of people make this one mistake that gets their information onto data broker lists.
The most common mistake is filling out online surveys and quizzes, or signing up for “free” product samples, using their real information. A person filled out a survey to be entered into a drawing for a free gift card. She didn’t read the fine print, which stated that her information would be shared with “marketing partners.” She had just willingly handed over her personal data, which was then immediately sold to a data broker. These seemingly innocent offers are often just a front for data collection.
This one small action of opting out of the top 5 data brokers will significantly reduce the amount of spam and robocalls you receive forever.
A person was being driven crazy by the constant stream of spam emails and robocalls. She learned that a huge percentage of this unwanted contact originates from a handful of major data brokers. She took an afternoon and worked through the tedious but entirely doable opt-out process for the five largest data brokers. A few weeks later, she noticed a dramatic and lasting reduction in the amount of spam she received, because she had cut off the problem at one of its biggest sources.
The reason you’re a target for identity theft is because your personal information is for sale on the dark web for a few dollars.
A person became a victim of identity theft. A criminal had opened a credit card in her name. She couldn’t understand how they got her information. The reality is that after years of data breaches at major companies, her name, address, date of birth, and even her Social Security number were likely part of a large dataset being sold on the dark web for a few dollars. Data brokers have made the raw materials for identity theft cheap and easily accessible to criminals.
If you’re still not actively trying to remove your data from broker sites, you’re losing control over your identity.
Two people had their data exposed in a data breach. The first person did nothing. Her information was picked up by data brokers and remains easily searchable online. The second person took proactive steps. She froze her credit and used a data removal service to get her information taken off the major people-search sites. While not a perfect solution, she took active control of her digital identity, making it much harder for scammers and criminals to find and exploit her personal information.
Digital Footprint
Use a proactive approach to managing your digital footprint, not a reactive one after a problem occurs.
A person was applying for jobs and was suddenly worried about the embarrassing photos her friends had tagged her in on social media years ago. She was now reactively scrambling to clean up her digital footprint. Her friend, however, had always been proactive. She regularly reviewed her privacy settings, untagged herself from unflattering photos, and maintained a professional online presence. When it came time for her job search, she was confident and prepared, not panicked.
Stop thinking that deleting something from the web means it’s gone forever. Do use the “right to be forgotten” where possible instead.
A teenager wrote an embarrassing blog post. A few years later, mortified, he deleted the blog. He thought it was gone forever. He didn’t know about the Internet Archive’s Wayback Machine, which had already saved a permanent copy. In some jurisdictions, like the EU, he could have used his “right to be forgotten” to request that search engines delist the content, making it much harder for people to find, even if a copy still exists somewhere.
The #1 tip for cleaning up your old, embarrassing online accounts.
The best tip is to search your old email accounts for welcome emails. A person wanted to delete the cringe-worthy accounts she had created as a teenager, but she couldn’t remember all of them. She logged into her first-ever email account and searched for words like “welcome,” “register,” and “new account.” She found a treasure trove of old accounts she had completely forgotten about, which allowed her to go and delete them one by one.
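If you export or script access to that old inbox, the same keyword search is trivial to automate. A small Python sketch, operating on a plain list of subject lines (the subjects and keyword list here are illustrative, not exhaustive):

```python
# Phrases that commonly appear in account-creation emails.
SIGNUP_KEYWORDS = ("welcome", "verify", "confirm your", "new account", "registration")

def find_signup_emails(subjects):
    """Filter a list of email subjects down to likely account-creation emails."""
    return [s for s in subjects if any(k in s.lower() for k in SIGNUP_KEYWORDS)]

subjects = [
    "Welcome to ForumX!",
    "Your weekly newsletter",
    "Please confirm your account",
    "Re: lunch?",
]
old_accounts = find_signup_emails(subjects)
```

Each hit is a service you once signed up for, and therefore a candidate for deletion.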
The biggest lie you’ve been told about the permanence of the internet.
The lie is that everything on the internet is permanent. While it’s true that things can be difficult to remove, the web is also constantly changing. A person was worried about a negative comment on an old, obscure forum. A year later, that entire website had shut down and the comment was gone. Websites go offline, companies go out of business, and content is lost. While you should always be careful about what you post, the internet is not a perfect, monolithic archive.
I wish I knew this about how employers scrutinize a candidate’s digital footprint when I was first applying for jobs.
When I was in college, I used a public social media account to post photos from parties and share controversial political opinions. I didn’t think anyone would care. When I was applying for my first professional job, I was rejected after the final round and I never knew why. I wish I had known that a huge percentage of employers check a candidate’s social media profiles as part of their screening process. My unprofessional digital footprint likely cost me that opportunity.
I’m just going to say it: Your digital footprint is your permanent record.
In the past, a youthful indiscretion might be forgotten over time. Today, a foolish tweet, an embarrassing photo, or an angry blog post can be screenshotted, archived, and attached to your name forever. This digital footprint can be accessed by future employers, romantic partners, and anyone else who decides to search for your name. We are the first generation to have a publicly accessible, permanent record of our entire lives, and we must act accordingly.
99% of young people make this one mistake that will haunt their future digital footprint.
The most common mistake is using their full, real name on a public profile where they are sharing personal or unprofessional content. A high school student would post edgy memes and complain about her teachers on a public Instagram account that used her real name. She didn’t realize that these posts would be the first thing a college admissions officer or a future employer would see when they googled her name ten years later. Using a pseudonym for casual online activity is a crucial privacy habit.
This one small habit of “Googling” yourself regularly will change the way you manage your online reputation forever.
A professional was surprised when a new acquaintance mentioned a very old and obscure hobby of hers. She was confused about how they knew. She googled her own name and discovered that an old forum profile she had made years ago was one of the top search results. She adopted a new habit: once a month, she would perform a search for her own name in a private browsing window. This allowed her to see what a stranger would see and to proactively manage her digital footprint by deleting old accounts or updating privacy settings.
The reason you’re not getting job offers might be because of what a recruiter found in your digital footprint.
A candidate had a perfect resume and aced her phone interview. But she never got called for an in-person interview. The reason? The recruiter did a quick search of her name and found a public blog where she had written extensively about her political views in an aggressive and unprofessional manner. The company, wanting to avoid potential workplace conflicts, decided to pass on her candidacy. What she had posted in her personal time directly impacted her professional opportunities.
If you’re still using the same username for every website, you’re losing your ability to compartmentalize your digital life.
A person used the same distinctive username for his professional Twitter account, his supposedly anonymous Reddit account where he discussed sensitive health issues, and his online gaming account. A curious person could easily use a search engine to connect these different identities, shattering his privacy. By using different usernames for different contexts—one for professional life, one for hobbies, one for anonymous discussion—he could have created separate, compartmentalized digital identities that were not easily linked to each other.
Encrypted Messaging
Use an end-to-end encrypted messenger like Signal, not SMS or Facebook Messenger.
A political activist was organizing a protest. If she had used Facebook Messenger or a standard SMS group chat, the platform or the cell carrier could have been forced to hand over the contents of their conversations to the authorities. Instead, she insisted that all the organizers use Signal. Because Signal is end-to-end encrypted, only the people in the chat could read the messages. Not even the company that runs Signal could access their communications, ensuring their plans remained private and secure.
Stop taking screenshots of private conversations. Do respect the privacy of the person you’re talking to instead.
Two friends were having a private, venting session about a mutual acquaintance in what they thought was a secure chat. One of them took a screenshot of the conversation and shared it with another person. The “private” conversation was now public, and the friendship was broken. End-to-end encryption can protect your messages from outside hackers, but it cannot protect you from a breach of trust by the person you are talking to. The foundation of any private conversation is mutual respect.
The #1 secret for verifying the identity of your contact in an encrypted chat.
The secret is to verify their “safety number” or “security code” out-of-band. An encrypted app will show you a unique code for each chat. To ensure you are not being subjected to a “man-in-the-middle” attack, you need to verify that your code and your contact’s code match. The most secure way to do this is to call them or meet them in person and read the numbers to each other. This out-of-band verification ensures that the encrypted channel between you is secure.
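Conceptually, a safety number is just a short digest of both parties’ public keys, computed independently on each device; if the two devices show the same number, no third key has been swapped into the middle. Here is a deliberately simplified Python sketch of the idea — this is not Signal’s actual algorithm, and the key values are made up:

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes) -> str:
    """Derive a short, human-comparable code from two public keys.

    Sorting the keys first means both parties compute the identical number,
    no matter which side runs the calculation.
    """
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    digits = str(int(digest, 16))[:30]
    # Group into five-digit chunks, the way messaging apps display them.
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))
```

If a man-in-the-middle substitutes his own key for your contact’s, the digest changes, and the numbers you read aloud to each other will no longer match — which is exactly why the comparison must happen out-of-band.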
The biggest lie you’ve been told about the security of WhatsApp.
The lie is that because WhatsApp is end-to-end encrypted, it is a completely private messaging app. While the content of your messages is encrypted, WhatsApp is owned by Meta (Facebook), and it still collects a huge amount of metadata. It knows who you are talking to, when you are talking to them, how often you talk, your location, and your contact list. For true privacy, you need a service that not only encrypts your messages but also collects the absolute minimum amount of metadata.
I wish I knew this about the difference between encryption in transit and end-to-end encryption when I started using messaging apps.
I used to think that the little green padlock icon on a messaging app meant my chats were private. I didn’t know that this usually just means the message is encrypted “in transit” to the company’s server. The company can still read my messages on their server. I wish I had known about end-to-end encryption, where the message is encrypted on my device and can only be decrypted by the recipient’s device. With end-to-end encryption, the company in the middle cannot read your messages, even if they wanted to.
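The difference can be made concrete with a deliberately toy model. XOR is not real cryptography — it stands in here only to mark who holds a key, which is the entire distinction between the two architectures:

```python
# Toy model only: XOR "encryption". Applying it twice with the same key decrypts.
def toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Encryption in transit: the provider's server holds the key, so it decrypts
# and can read every message once it arrives.
transit_key = b"held-by-the-server"
on_the_wire = toy_cipher(b"my private message", transit_key)
what_the_server_reads = toy_cipher(on_the_wire, transit_key)

# End-to-end: only the two devices share the key; the server merely relays
# an opaque blob it has no way to decrypt.
endpoint_key = b"shared-only-by-the-two-devices"
relayed_blob = toy_cipher(b"my private message", endpoint_key)
```

Same ciphertext machinery in both halves; the only thing that changed is where the key lives, and that is the whole of the privacy difference.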
I’m just going to say it: If a messaging app is not end-to-end encrypted by default, it does not respect your privacy.
Some messaging apps offer end-to-end encryption as an optional feature that you have to turn on, like Facebook Messenger’s “Secret Conversations.” The vast majority of users will never do this. This means that for most people, their conversations are not private. A company that truly respects user privacy will make end-to-end encryption the default for all conversations, not an obscure setting that most people will never find. Default settings are what matter.
99% of people make this one mistake when they switch to a secure messaging app.
The most common mistake is convincing one or two of their friends to switch to a secure app like Signal for their one-on-one chats, but continuing to have all of their group chats on an insecure platform like Facebook Messenger or Instagram DMs. Your communications are only as secure as your least secure conversation. If you are having sensitive discussions in a large group chat, that is the one that needs to be moved to a secure, end-to-end encrypted platform first.
This one small action of enabling disappearing messages will change the privacy of your conversations forever.
Two people needed to discuss a sensitive business matter. They were using an end-to-end encrypted messaging app, but they were still worried about the conversation being saved on their phones forever, where it could be found if a phone was lost or stolen. They enabled the “disappearing messages” feature, setting the messages to be automatically deleted from both of their devices after one hour. This one small action ensured that they could have a private conversation with no permanent, lingering digital record.
The reason your “private” group chat isn’t private is because it’s not end-to-end encrypted for all members.
A group of friends were using a messaging app that supported end-to-end encryption. However, one person in the group chat was still using an older version of the app that didn’t support it. Because of this single member, the entire group chat was downgraded to a less secure, unencrypted mode. For group encryption to work, every single member of the group must be using a compatible, secure application.
If you’re still sending sensitive information over SMS, you’re losing it to your cell carrier and potentially government surveillance.
A person texted their Social Security number to a family member. They didn’t realize that SMS is a 1980s-era technology with virtually no security. The message travels in plain text over the cellular network, where it can be intercepted, and the cell carrier stores a record of all of your text messages. For sending any kind of sensitive personal or financial information, SMS is one of the least secure methods of communication you can possibly use.
Photo & File Metadata (EXIF)
Use a metadata stripping tool, not just uploading photos directly from your phone.
A journalist took a photo of a confidential source in a “safe” location. Before publishing the photo, she used a metadata stripping tool. The tool removed all the hidden EXIF data from the image file, including the exact GPS coordinates of where the photo was taken, the date and time, and the model of the camera used. By taking this one extra step, she protected the identity and location of her source from being discovered by analyzing the photo’s hidden data.
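For the curious, what a stripping tool actually does is not magic: in a JPEG, EXIF data lives in an “APP1” marker segment, and removing it is a matter of copying every segment except that one. Below is a minimal pure-Python sketch of the idea; real tools such as exiftool handle many more formats and edge cases than this does.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")          # keep the Start-of-Image marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                # Start of Scan: copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:                # drop APP1; keep every other segment
            out += segment
        i += 2 + length
    return bytes(out)
```

The image data after the Start-of-Scan marker is untouched, so the photo looks identical — only the hidden GPS coordinates, timestamps, and camera details are gone.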
Stop posting photos online without considering the hidden data within them. Do scrub your photos of location data before sharing instead.
A person took a photo of his new, expensive bike inside his garage and posted it on a public forum. He didn’t realize the photo’s hidden metadata contained the precise GPS coordinates of his home. A thief was able to use this data to find his house and later steal the bike. Before posting any photo online, especially of valuable items or inside your home, it’s crucial to use a tool to view and remove this sensitive location data.
The #1 hack for removing all personal information from your documents before you share them.
The secret is to not just delete the visible information, but to also clean the file’s properties. A person was sharing a Word document. Before sending, she went to “File > Info > Inspect Document” and used the built-in Document Inspector. The tool found and removed all the hidden metadata, including the author’s name, the revision history with all the tracked changes and comments, and other hidden properties. This ensured that she was only sharing the final version of the document, not its entire hidden history.
The biggest lie you’ve been told about your files being “just files”.
The lie is that a file, like a photo or a PDF, contains only the information you can see. The reality is that almost every digital file contains a hidden layer of “metadata.” This metadata is a set of digital fingerprints that can reveal who created the file, when and where it was created, and what device was used to create it. This hidden data tells a story about the file’s origin that you may not want to share with the world.
I wish I knew this about how EXIF data revealed my home address from a photo I posted online.
I was proud of my new car, so I took a picture of it parked in my driveway and posted it online. I didn’t know anything about EXIF data. A stranger online was able to download the photo, extract the GPS coordinates embedded in the file, and post my home address in the comments. It was a terrifying experience. I wish I had known that my phone’s camera was automatically embedding my precise location into every single photo I took, and that I was broadcasting my home address to the entire internet.
I’m just going to say it: Every digital file you create tells a story about you that you may not want to be public.
A person submitted a resume as a PDF file. A potential employer opened the file’s properties and could see the name of the original author, the date it was created, and the title of the original Word document it was saved from, which was “desperate_job_search_resume_v12.docx.” Every file we create has a hidden digital trail. Understanding and managing this metadata is a critical part of maintaining a professional and private online identity.
99% of social media users make this one mistake when posting photos of their home or kids.
The most common mistake is posting photos that contain identifiable information in the background. A parent will post a cute photo of their child playing in the front yard. In the background, the house number and the street sign are clearly visible. In another photo, the child is wearing a school uniform with the school’s logo on it. This seemingly innocent information can be used by a stranger to easily identify a child’s home address and the school they attend.
This one small action of turning off location tagging in your camera app will change your privacy forever.
A person was concerned about their privacy. She went into her smartphone’s camera settings and turned off the setting that said “Save Location” or “Geotag Photos.” This one small action ensured that from that point forward, none of the photos she took would have her precise GPS coordinates embedded in them. It was a simple, one-time change that fundamentally improved the privacy of every single photo she would share in the future.
The reason people know so much about your life is because you’re leaking it through file metadata.
A person’s colleagues seemed to know surprisingly specific details about her new camera and the software she used. The reason? She had been sharing original photo files with them directly from her camera. The EXIF data in those files contained the exact camera model, lens, and even the version of the editing software she used. She was unknowingly leaking details about her personal technology choices with every photo she shared.
If you’re still sending original Word documents instead of PDFs, you’re losing control over your revision history.
An attorney sent a contract proposal to the opposing counsel as a Word document. The other lawyer was able to use the “Track Changes” feature to see all the previous versions of the contract, including the parts that had been deleted and the confidential comments that had been made in the margins. This revealed their entire negotiation strategy. By always converting the final document to a PDF, she could have sent a “flattened” version with none of this sensitive, hidden history.
Privacy Policies
Use a tool like “Terms of Service; Didn’t Read” (ToS;DR), not just blindly accepting privacy policies.
A person was about to sign up for a new online service. Instead of blindly clicking “I Agree,” she used a browser extension called ToS;DR. The extension gave her a simple, color-coded rating of the service’s privacy policy, highlighting the most important, anti-consumer clauses in plain English. It showed her that the service claimed ownership of her content. Based on this quick, easy-to-understand summary, she decided not to use the service.
Stop ignoring privacy policies. Do at least read the summary of how your data will be used instead.
A person would just scroll to the bottom of any privacy policy and click “Accept” without reading a single word. He was constantly surprised when his data was used in unexpected ways. He started taking an extra 30 seconds to at least read the short, plain-language summary that many services now provide at the top of their policies. While not as good as reading the whole thing, this small effort gave him a much better baseline understanding of what he was actually agreeing to.
The #1 secret for quickly identifying red flags in a privacy policy.
The secret is to use your browser’s “Find” function (Ctrl+F) to search for specific keywords. A savvy user, faced with a long privacy policy, will quickly search for words like “sell,” “share,” “third parties,” “affiliates,” and “marketing.” Where these words appear will instantly tell you how the company plans to monetize your data. She also searches for the phrase “at any time,” which often indicates that the company reserves the right to change its policy without directly notifying you.
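That Ctrl+F routine is easy to automate if you paste the policy text into a script. A short Python sketch — the keyword list is illustrative, not exhaustive — that surfaces the sentences worth reading closely:

```python
import re

RED_FLAGS = ["sell", "share", "third parties", "affiliates", "marketing", "at any time"]

def find_red_flags(policy_text: str) -> dict:
    """Map each red-flag keyword to the sentences of the policy that contain it."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    hits = {}
    for keyword in RED_FLAGS:
        matched = [s.strip() for s in sentences if keyword in s.lower()]
        if matched:
            hits[keyword] = matched
    return hits
```

The output is a keyword-to-sentences map, so instead of skimming thousands of words you read only the handful of clauses that describe how your data gets monetized.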
The biggest lie you’ve been told is “we care about your privacy”.
Almost every privacy policy starts with a friendly, reassuring sentence like, “We value your privacy.” This is often immediately followed by thousands of words of dense legal text that grants the company broad rights to collect, use, and sell your personal information. This opening sentence is not a legally binding promise; it’s a public relations tactic designed to make you feel comfortable. You should always trust the legal details of the policy, not the friendly introduction.
I wish I knew this about how companies can change their privacy policies at any time when I was younger.
I signed up for a small, privacy-focused social media site. Their privacy policy was great. A few years later, the site was acquired by a large data company. The new owners changed the privacy policy to allow for extensive data collection and sharing with advertisers. Because I had agreed to the original terms, which included a clause that they could change the policy at any time, I had no recourse. I wish I had known that a good privacy policy today is no guarantee of a good one tomorrow.
I’m just going to say it: Privacy policies are intentionally written to be confusing and unreadable.
A privacy policy is a legal document. It is often not written with the goal of clearly informing the user. It is written by lawyers with the goal of providing the company with the maximum legal protection and the broadest possible rights to use your data. The use of dense, convoluted legal language is a deliberate feature, not a bug. It creates a situation where almost nobody can realistically read and understand the terms they are agreeing to.
99% of internet users make this one mistake every single day without realizing it.
The most common mistake, made by billions of people every day, is clicking a box that says “I have read and agree to the Terms of Service and Privacy Policy” when they have, in fact, done neither. This single click is a legally binding action. You are entering into a contract with the company, and you are doing so without reading the terms of that contract. It’s a mistake that is so common and routine that we no longer even recognize it as the significant legal act that it is.
This one small action of searching for the words “sell,” “share,” and “third parties” in a privacy policy will change your understanding of a service forever.
A person was about to download a new mobile game. Before she did, she found its privacy policy online. She used the “Find” feature on her browser and searched for the word “sell.” She found a sentence that said, “We may sell your personal information to our marketing partners.” She immediately knew that the “free” game was not free at all. This one small action of searching for a few keywords gave her a crystal clear understanding of the company’s business model.
The reason you’re surprised by how a company uses your data is because you agreed to it in the privacy policy you didn’t read.
A user was outraged when he discovered that a social media company was using his photos to train their facial recognition AI. He wanted to sue them. His lawyer pointed to a section in the privacy policy he had agreed to, which explicitly granted the company a license to use his content for “research and product development.” He was surprised and angry, but he had no legal standing because he had contractually agreed to it.
If you’re still clicking “I Agree” without a second thought, you’re losing your digital rights.
Every time you click “I Agree” to a privacy policy, you are signing a contract. You are often agreeing to let a company track your location, sell your data, and use your content in ways you would likely find objectionable if you knew about them. By clicking without reading or understanding, you are slowly but surely signing away your rights to privacy and data ownership. In the digital world, these lengthy, unreadable documents have become the primary mechanism through which our rights are eroded.