
JPMorgan Chase Develops 'Early Warning System'

Researchers Detail Use of Machine Learning to Find Phishing URLs

Researchers at JPMorgan Chase have developed what they call a novel "early warning" security system that uses artificial intelligence to detect malware, Trojans and other advanced persistent threats before bad actors even start using phishing emails to target employees, according to a new research paper published earlier in July.


The AI system uses a combination of big data and deep learning algorithms for early-stage detection of phishing emails and other malicious payloads that target bank employees, according to the research paper, written by five researchers at JPMorgan Chase.

In the paper, the researchers describe the AI-based system as particularly useful for identifying mass phishing campaigns created through domain generation algorithms, as well as for flagging malicious URLs by checking them against predefined characteristics, such as unusual traffic patterns, jumbled URLs and spelling mistakes, that typically indicate suspicious activity.
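The paper does not publish its feature set, but a minimal sketch of the kind of lexical URL checks described above (hostname entropy for jumbled strings, digit ratios, look-alike spellings) might look like the following. The thresholds and the sample brand misspellings are illustrative assumptions, not the bank's actual features:

```python
import math
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract simple lexical features of the kind the paper describes
    (jumbled URLs, spelling tricks) for feeding a phishing classifier."""
    host = urlparse(url).hostname or ""
    # Shannon entropy of the hostname: jumbled/random strings score high.
    counts = {c: host.count(c) for c in set(host)}
    entropy = -sum(n / len(host) * math.log2(n / len(host))
                   for n in counts.values()) if host else 0.0
    return {
        "host_len": len(host),
        "digit_ratio": sum(c.isdigit() for c in host) / max(len(host), 1),
        "entropy": entropy,
        "hyphens": host.count("-"),
        # A few common brand misspellings as a crude spelling-mistake signal
        # (hypothetical examples only).
        "lookalike": any(s in host for s in ("paypa1", "g00gle", "chasse")),
    }
```

In a real pipeline, feature vectors like these would be passed to a trained model rather than used as hard rules.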

In addition, the system provides real-time feedback on new domain registrations and activity, the researchers report.

"This AI-based early warning and multi-stage system detects malicious Trojan activities from internal and external sources, through the lifecycle of the banking botnets, even ahead of the actual spear-phishing campaign," the researchers note in the paper.

Although it remains unclear whether the financial giant has already deployed the system the researchers describe, JPMorgan Chase has been working to strengthen its cybersecurity using advanced technologies such as AI and machine learning. In an interview with CNBC, CEO Jamie Dimon says the company spends almost $600 million annually on its security infrastructure.

A spokesperson for JPMorgan Chase did not respond to emails seeking additional comment about the research paper.

In its latest annual report, Dimon noted that AI will play a crucial role in curbing risk and fraud for the company and added that the technology is expected to drive $150 million in annual benefits.

"Machine learning is helping to deliver a better customer experience while also prioritizing safety at the point of sale, where fraud losses have been reduced significantly, with automated decisions on transactions made in milliseconds," Dimon notes in the bank's annual report.

Training the Data Sets

The AI system described in the research is composed of three components. The first is a data lake that serves as a common platform for collecting and detecting botnet data that can indicate the start of an attack. The second is a multistage detection pipeline for spotting malicious actors.

The third verifies the inputs from the data sets and the detection stages: the AI algorithms are split into two parts designed to identify advanced threats that use sophisticated tools to evade detection.

By bifurcating the AI models into domain generation algorithm and spear-phishing classifiers, the system predicts whether a given URL or domain corresponds to a phishing or DGA campaign, according to the paper.
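The paper does not detail how its DGA classifier works. As a toy illustration of the underlying idea, algorithmically generated domain names tend to lack the vowel structure of human-chosen words, which even a simple heuristic (not the bank's model) can score:

```python
import re

def dga_score(domain: str) -> float:
    """Toy stand-in for a DGA classifier: algorithmically generated
    names tend to have few vowels and long consonant runs.
    Returns a score in [0, 1]; higher means more DGA-like."""
    name = re.sub(r"[^a-z]", "", domain.split(".")[0].lower())
    if not name:
        return 0.0
    vowel_ratio = sum(c in "aeiou" for c in name) / len(name)
    runs = re.findall(r"[^aeiou]+", name)
    longest_run = max((len(r) for r in runs), default=0)
    # Low vowel ratio and long consonant runs both push the score up.
    return (1.0 - vowel_ratio) * 0.5 + min(longest_run / len(name), 1.0) * 0.5
```

A production DGA detector would instead learn these character statistics from labeled domain feeds, but the separation it exploits is the same.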

By training the system on publicly available data sets of phishing URLs, the researchers say, it develops a deep learning-based detection and identification capability for malware threats that surpasses traditional security filtering, with the goal of spotting threats before threat actors start sending phishing emails that target employees.

The primary training data comprises millions of malicious domains and phishing links, as well as non-phishing URLs from sources such as Alexa and the DMOZ Open Directory Project, which act as content repositories of website links, and PhishTank, a site for verifying phishing URLs, the paper states.
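Feeds like PhishTank and the Alexa/DMOZ lists are, in effect, labeled URL collections. A minimal sketch of training on such data, with tiny made-up URL lists standing in for the real feeds, could use a character-bigram naive Bayes score (the paper's actual deep learning model would be far more sophisticated):

```python
import math
from collections import Counter

def bigrams(url: str):
    """Split a URL into overlapping two-character chunks."""
    s = url.lower()
    return [s[i:i + 2] for i in range(len(s) - 1)]

def train(urls):
    """Count character bigrams over a labeled URL list (a real system
    would ingest feeds such as PhishTank and Alexa here)."""
    counts = Counter()
    for u in urls:
        counts.update(bigrams(u))
    return counts

def log_odds_phishing(url, phish_counts, benign_counts):
    """Naive Bayes log-odds: positive means the URL's bigrams look more
    like the phishing corpus than the benign one."""
    pt, bt = sum(phish_counts.values()), sum(benign_counts.values())
    score = 0.0
    for bg in bigrams(url):
        p = (phish_counts[bg] + 1) / (pt + 1)   # add-one smoothing
        b = (benign_counts[bg] + 1) / (bt + 1)
        score += math.log(p / b)
    return score
```

For example, after training the two corpora on toy lists, a URL resembling the phishing examples would score above one resembling the benign list.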

Further, with the help of natural language processing, the early warning system looks for patterns such as mismatched URLs, poor spelling and grammar, and URLs that ask for personal credentials.

All this is designed to alert security staff as malicious actors are gearing up to launch a campaign by sending spear-phishing emails to employees to deliver Trojans and other malware. The researchers note in their paper that it takes, on average, about 101 days to compromise a network using a Trojan, so the system should give security teams enough advance warning that threat actors are taking the initial steps of an attack.

Breach History at JPMorgan

The ability to use new technologies such as AI and machine learning to detect attacks and anomalies is something the powerhouse bank has invested in more heavily since its own well-publicized data breach five years ago.

JPMorgan Chase, which is based in New York, sustained a major data breach in 2014 that affected 76 million households and 7 million small businesses, according to published reports.

The breach was considered one of the largest of its time; attackers stole information including names, addresses, phone numbers and email addresses from customers who used services such as JPMorganOnline, Chase Mobile and JPMorgan Mobile (see: Chase Breach Affects 76 Million Households).

The attack was allegedly carried out by Russian hackers who used a spear-phishing campaign to gain access by compromising the bank's network, The Wall Street Journal noted at the time.

Is AI the Way Forward?

Although AI and machine learning technologies are emerging as potent weapons against cybercrime, most enterprise-level use is limited to automation for now, according to experts.

Some of these same experts believe, however, that despite the high cost of adoption, AI is the future of tackling and countering cybercrime and other illegal activity.

Richard Absalom, a senior research analyst at the London-based Information Security Forum, believes that by automating certain processes such as asset identification and management, as well as malware and vulnerability detection, AI can reduce the burden on human security practitioners and help to address the technical skills gap in the industry.

"By learning what normal looks like and reacting autonomously to abnormal activity, AI systems can protect even against never-before-seen, zero-day threats," Absalom says.

AI coupled with advanced technologies such as machine learning and behavior modeling can stop potential new attack vectors, says Shahrokh Shahidzadeh, CEO of Acceptto, a Portland-based network security provider.

"In theory, systems can be tricked by mimicking a legitimate online appearance and satisfying all criteria that the AI has learned. But when you pair it with another technology, say machine learning, this creates models of human behavior that are much harder, and less likely, to be mimicked," Shahidzadeh explains.

About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.

