As AI use and discussion grow rapidly, ethics is increasingly raised as a genuine concern in areas such as Human Resources (HR) - or any scenario where AI takes over the decision process when selecting or judging human beings. One major talking point is the bias and/or discrimination that might be present in an AI system: when selecting a candidate for a new role, for example, does the AI automatically disregard candidates with a certain education or background? Here I will discuss where this bias comes from and why it isn't the AI itself that we should be concerned about.
Anomaly detection using AI is a powerful and efficient way of spotting anything unusual in a process, dataset or system. Using an Artificial Neural Network such as eldr.ai, it is easy to set up, fast and accurate.
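The exact architecture eldr.ai uses isn't described here, so purely as an illustration the sketch below treats anomaly detection as a reconstruction problem: a small neural network (scikit-learn's MLPRegressor standing in for the ANN) is trained to reproduce "normal" data, and points it reconstructs poorly are flagged as unusual. The data, threshold and network size are all invented for the example.

```python
# A minimal sketch of ANN-based anomaly detection via reconstruction error.
# All data, the threshold and the network size are made up for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# "Normal" training data: two correlated features (e.g. readings from a process).
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
normal[:, 1] = 0.8 * normal[:, 0] + 0.2 * normal[:, 1]

# Train the network to reconstruct its own input (an autoencoder-style setup).
autoencoder = MLPRegressor(hidden_layer_sizes=(1,), activation="tanh",
                           max_iter=2000, random_state=0)
autoencoder.fit(normal, normal)

def anomaly_score(model, X):
    """Mean squared reconstruction error per row - higher means more unusual."""
    reconstructed = model.predict(X)
    return np.mean((X - reconstructed) ** 2, axis=1)

# Score a batch containing one clearly unusual point.
batch = np.array([[0.1, 0.1], [1.0, 0.85], [4.0, -4.0]])
scores = anomaly_score(autoencoder, batch)
threshold = np.percentile(anomaly_score(autoencoder, normal), 99)

for point, score in zip(batch, scores):
    label = "ANOMALY" if score > threshold else "normal"
    print(point, round(float(score), 4), label)
```

The idea is simply that a network trained only on normal behaviour cannot reproduce points that break the patterns it has learned, so a high reconstruction error is a useful anomaly signal.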
We've added NLP functionality, meaning that eldr AI can now learn from full text, sentences, words and so on, while simultaneously learning from numeric values, categories and continuous data in the same Artificial Neural Network.
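As a rough sketch of what that means in practice (eldr AI's own pipeline isn't shown here), the example below uses scikit-learn to turn free text, a categorical column and a continuous value into a single feature matrix that one neural network trains on. All column names and example rows are hypothetical.

```python
# A minimal sketch of learning from free text and structured data in one
# neural network: vectorise the text, encode the categories, scale the
# continuous values and feed the concatenated features into a single
# MLPClassifier. Column names and rows are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

data = pd.DataFrame({
    "description": ["pump vibrating loudly", "routine inspection passed",
                    "temperature spike in bearing", "scheduled maintenance done"],
    "machine_type": ["pump", "conveyor", "pump", "conveyor"],
    "temperature": [71.2, 45.0, 93.5, 44.1],
    "label": [1, 0, 1, 0],  # 1 = fault reported, 0 = normal
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "description"),                    # full text / sentences / words
    ("category", OneHotEncoder(handle_unknown="ignore"), ["machine_type"]),
    ("continuous", StandardScaler(), ["temperature"]),
])

model = Pipeline([
    ("features", features),
    ("ann", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
])

model.fit(data.drop(columns="label"), data["label"])

# Predict on a new, unseen record that mixes text and structured values.
print(model.predict(pd.DataFrame({
    "description": ["loud vibration and heat from pump"],
    "machine_type": ["pump"],
    "temperature": [88.0],
})))
```

The key point is that once the text is turned into numbers alongside the categorical and continuous features, a single network can learn from all of them together rather than needing separate models per data type.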