Cyber Crime with a Confusion Matrix
Cyber-crime incident data from India were classified using machine-learning techniques. The model, which predicted crimes with 99% accuracy, reduced the time spent on analysis and manual reporting.
Cyber-attack method and perpetrator prediction using machine learning algorithms
Cyber-attacks have become one of the world's biggest problems. They cause serious financial damage to countries and people every day, and the increase in cyber-attacks also brings an increase in cyber-crime. The key factors in the fight against crime and criminals are identifying the perpetrators of cyber-crime and understanding the methods of attack. Detecting and avoiding cyber-attacks are difficult tasks. However, researchers have recently been tackling these problems by developing security models and making predictions with artificial-intelligence methods. Many crime-prediction methods are available in the literature.
However, these methods fall short in predicting cyber-crimes and cyber-attack methods. This problem can be tackled by identifying an attack and its perpetrator from actual data. The data include the type of crime, the gender of the perpetrator, the damage caused, and the method of attack, and can be acquired from the reports that victims of cyber-attacks file with forensic units. In this paper, the authors analyze cyber-crimes with two different models built on machine-learning methods and predict the effect of the defined features on the detection of the cyber-attack method and the perpetrator. They used eight machine-learning methods and found that their accuracy ratios were close.
The linear Support Vector Machine was the most successful at predicting the cyber-attack method, with an accuracy of 95.02%. With the first model, the types of attacks that victims were likely to face could be predicted with high accuracy. Logistic Regression was the leading method for detecting attackers, with an accuracy of 65.42%. With the second model, the authors predicted whether perpetrators could be identified by comparing their characteristics. The results revealed that the probability of a cyber-attack decreases as the victim's education and income level increase. The authors expect cyber-crime units to adopt the proposed model, which would facilitate the detection of cyber-attacks and make the fight against them easier and more effective.
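The comparison described above can be sketched in code. The following is a minimal illustration, not the authors' implementation or data: it trains two of the eight methods they mention, a linear SVM and Logistic Regression, on a synthetic stand-in dataset from scikit-learn and reports each model's accuracy.

```python
# Sketch of comparing two classifiers' accuracy, as in the paper's setup.
# The dataset here is synthetic; the real features would be crime type,
# perpetrator gender, damage, and attack method.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("Linear SVM", LinearSVC(max_iter=10000)),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {results[name]:.4f}")
```

On real incident data, the same loop would simply be extended to all eight methods so their accuracy ratios can be compared side by side.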
The Confusion Matrix Dashboard lets us experiment with two kinds of interactive synthetic distributions. The first places Positive and Negative outcomes in bumps, or "bell curves": we can play with the height, width, and location of the Positive and Negative bumps. The second spreads the Positive and Negative outcomes more uniformly along the prediction-score axis: we can play with how these distributions rise and fall as the score increases.
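The first kind of distribution can be sketched as follows. This is an assumed implementation, not the dashboard's actual code: it draws Positive and Negative scores from two Gaussian "bumps" and tallies the confusion matrix at a chosen threshold.

```python
# Two synthetic score "bumps" along the prediction-score axis.
# Adjust loc (location), scale (width), and size (height/mass) to play
# with the bumps, as the dashboard does interactively.
import numpy as np

rng = np.random.default_rng(0)
negatives = rng.normal(loc=3.0, scale=1.5, size=1000)  # scores of true Negatives
positives = rng.normal(loc=7.0, scale=1.5, size=1000)  # scores of true Positives

threshold = 5.0
tp = int((positives >= threshold).sum())  # predicted Positive, actually Positive
fn = int((positives < threshold).sum())   # predicted Negative, actually Positive
fp = int((negatives >= threshold).sum())  # predicted Positive, actually Negative
tn = int((negatives < threshold).sum())   # predicted Negative, actually Negative
print([[tp, fn], [fp, tn]])
```

Sliding `threshold` left or right trades false negatives against false positives, which is exactly the experiment the dashboard supports.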
Example: Apple Snacks
Here is a realistic, made-up example. It starts simple, but it builds up to let us evaluate concepts of algorithmic bias and fairness.
Suppose we are packaging snack boxes for 500 kids for a school picnic. Some of the kids like apples, while others prefer something else, like popcorn, a cracker, or a cookie. We have two kinds of snack box: one with an apple and one with something else. We must decide in advance which kind of box to give each kid, then label it with their name and hand it out. When each kid opens their snack box, they will either be happy with their snack and say "Yay!", or be disappointed and say "Awwww".
To help us decide, we predict which kind of snack each kid likes based on some rules of thumb. Older kids tend to like apples while younger kids do not. Taller kids like apples while shorter kids don't. Kids in Mr. Applebaum's class tend to prefer apples, while kids in Ms. Popcorn's class want something else. There are no hard and fast rules, just educated guesses.
For each kid, we assign a point score indicating our prediction that they will want an apple. A score of 10 means we are quite sure they'll want an apple, for example a taller 10-year-old in Mr. Applebaum's class. A score of 1 means we're confident they will not want an apple, like a shorter 6-year-old in Ms. Popcorn's class.
For each kid, after calculating their prediction score, we make a decision. We might set the decision threshold at the mid-point score of 5, or we might set it higher or lower. For example, if it's important that the kids eat fruit, we'll set the threshold lower so that we catch more kids who prefer apples. Or, if we want to err on the side of having fewer apple slices discarded in the trash, we'll set the threshold higher, so fewer apple snacks are handed out. In other words, our decision depends on the tradeoffs between different kinds of errors.
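The tradeoff above can be made concrete with a small sketch. The scores and preferences below are invented for illustration; the point is only how moving the threshold trades one kind of error against the other.

```python
# Hypothetical picnic: 500 kids with scores 1-10, where a higher score
# makes a kid more likely to actually want an apple.
import random

random.seed(42)
kids = []
for _ in range(500):
    score = random.randint(1, 10)
    likes_apple = random.random() < score / 10
    kids.append((score, likes_apple))

errors = {}
for threshold in (3, 5, 7):
    # Apple fans denied an apple (one kind of error)...
    missed = sum(1 for s, likes in kids if likes and s < threshold)
    # ...versus apples handed to kids who don't want them (the other kind).
    wasted = sum(1 for s, likes in kids if not likes and s >= threshold)
    errors[threshold] = (missed, wasted)
    print(f"threshold {threshold}: missed apple fans = {missed}, "
          f"wasted apples = {wasted}")
```

Lowering the threshold catches more apple fans but throws away more apples; raising it does the opposite.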
At the picnic, we record the kids' reactions. We write down the prediction score we gave each kid, whether we gave them an apple based on the decision threshold, and their reaction. This is the payoff for our decision.
We tally outcomes in a table with rows corresponding to the kids' preferences and columns corresponding to our decision to give each one an apple snack or another snack. The confusion matrix counts the numbers and ratios for each quadrant of the table.
The raw counts table is in the upper left, while the ratio table is on the right.
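As a small sketch with invented numbers (not data from an actual picnic), here is how the raw-counts table and the ratios table relate: each ratio is just that quadrant's count divided by the total.

```python
# Rows: whether the kid actually likes apples; columns: what we gave them.
counts = {
    ("likes apple", "gave apple"): 180,  # "Yay!"  (true positive)
    ("likes apple", "gave other"): 40,   # "Awwww" (false negative)
    ("likes other", "gave apple"): 60,   # "Awwww" (false positive)
    ("likes other", "gave other"): 220,  # "Yay!"  (true negative)
}

total = sum(counts.values())
ratios = {cell: n / total for cell, n in counts.items()}
for cell, n in counts.items():
    print(cell, n, f"{ratios[cell]:.2f}")
```

The counts in the four quadrants sum to all 500 kids, and the four ratios sum to 1.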