Challenges
A
-
Airbus
- Challenge number 2642
How well does machine learning anomaly detection work in practice in cyber security?
While a great deal of academic literature has been produced on detecting anomalies in the cyber security domain (e.g. in network traffic), how well do these methods or systems work in the 'real world'? This is an important topic, since anomaly detection in cyber security poses several challenges that are not all found in other applications of machine learning: the threat of adversaries, large numbers of false positives or false negatives, a lack of training data, and a lack of evaluation methods.
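The false-positive problem mentioned above is partly a base-rate effect, which a quick calculation can illustrate. A minimal sketch, with all figures hypothetical:

```python
# Base-rate sketch: even an accurate detector yields mostly false alerts
# when genuine attacks are rare. All numbers here are hypothetical.
def alert_precision(tpr: float, fpr: float, attack_rate: float) -> float:
    """Fraction of raised alerts that correspond to real attacks."""
    true_alerts = tpr * attack_rate            # attacks correctly flagged
    false_alerts = fpr * (1 - attack_rate)     # benign events flagged
    return true_alerts / (true_alerts + false_alerts)

# A detector catching 99% of attacks with only a 1% false-positive rate,
# on traffic where 0.1% of events are malicious:
print(f"{alert_precision(0.99, 0.01, 0.001):.1%}")  # ~9% of alerts are real
```

Under these (assumed) rates, roughly nine out of ten alerts would still be false positives, which is exactly the evaluation gap the challenge points to.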
B
-
BT
- Challenge number 2667
How can we ensure that systems utilising AI for novel threat (anomaly) detection do not produce high numbers of false positives requiring human analysis, and how can the sensemaking and triage process of such alerts be improved?
The last thing security operations teams want is another tool raising alerts that take more of their time to understand. Unsupervised anomaly detection is likely to be rife with false positives (suspicious but not malicious) yet, without human input, is unlikely to improve.
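As a toy illustration of "suspicious but not malicious": a minimal unsupervised detector (a simple z-score rule here; the data and threshold are hypothetical) will flag any rare-but-benign event just as readily as an attack, since it has no notion of intent:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag indices of points far from the mean (no notion of 'malicious')."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * sd]

# 50 typical days of login counts, then one spike -- which might be an
# attack, or just a company-wide password reset. The detector cannot tell.
logins = [10] * 50 + [100]
print(zscore_outliers(logins))  # [50], flagged either way
```

Distinguishing the two cases is precisely where the human sensemaking and triage input in the challenge comes in.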
- Challenge number 2673
To what extent is it feasible to develop automated responses to detected (potential) cyber threats?
Particularly when thinking about AI-driven attacks, rapid real-time responses are likely to be required. This thought is around going beyond simple systems of response automation (e.g. rule-based) and could include optimising novel defensive strategies, acting under uncertainty, continuous adaptation and learning, etc.
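For contrast, the simple rule-based automation the challenge looks beyond might be sketched as a static playbook (alert fields, thresholds and action names below are all hypothetical):

```python
# Static playbook: each rule is a (predicate, action) pair. This is the
# baseline the challenge wants to go beyond: it cannot act under
# uncertainty or adapt to novel attacks.
RESPONSE_RULES = [
    (lambda a: a.get("type") == "brute_force" and a.get("attempts", 0) > 100,
     "lock_account"),
    (lambda a: a.get("type") == "malware" and a.get("confidence", 0) > 0.9,
     "isolate_host"),
]

def respond(alert: dict) -> str:
    for matches, action in RESPONSE_RULES:
        if matches(alert):
            return action
    return "escalate_to_analyst"   # anything unmatched falls back to a human

print(respond({"type": "brute_force", "attempts": 500}))  # lock_account
```

Anything not anticipated by a rule falls through to a human, which is exactly the bottleneck that adaptive, learning-based response would aim to relieve.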
F
-
Fortinet
- Challenge number 2718
How can security organisations, universities and security vendors share and collaborate on threat intelligence in a more efficient manner?
Organisations are continually on the back foot when it comes to threat intelligence, and collaboration between organisations is limited. We would be keen to discuss how this could be improved so that organisations can start getting a foothold in protecting themselves against known and unknown threats.
- Challenge number 2721
How can security vendors assist with improving security awareness within organisations and among students?
Training is a challenge across all sectors: employee and student training is critical as a first line of defence against phishing attacks, clickbait, etc. We would be keen to discuss how we, as a group, could better educate all parts of society.
-
Fujitsu Laboratories of Europe
- Challenge number 2643
How do we reduce alert fatigue/burn-out in a managed security services provider?
We run a security operations centre to which companies outsource their network security. Because monitoring covers many customers, detection rules are very generalised, which can create a lot of “noise”: analysts end up with many alerts that cannot be actioned due to service restrictions. This can lead to burn-out, where analysts become more likely to class a ticket as “noise” and some potentially legitimate alerts may be missed. We would like to address this on a technical level and find a balance between convenience and confidentiality: we understand that tailoring alerts per customer would address this, but it is not always possible.
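One technical direction (purely illustrative, with hypothetical alert fields) is to collapse repeated alerts from the same generalised rule into a single ticket, so analysts triage the pattern once rather than every instance:

```python
from collections import defaultdict

def group_alerts(alerts):
    """Collapse alerts sharing (rule, customer) into one ticket each,
    so a noisy generalised rule yields one item to triage, not hundreds."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["rule"], a["customer"])].append(a)
    return [
        {"rule": rule, "customer": cust, "count": len(items)}
        for (rule, cust), items in groups.items()
    ]

alerts = ([{"rule": "port_scan", "customer": "acme"}] * 300
          + [{"rule": "priv_esc", "customer": "acme"}])
print(len(group_alerts(alerts)))  # 2 tickets instead of 301 raw alerts
```

Grouping alone does not decide which patterns matter, but it reshapes the workload so that a rare, potentially legitimate alert is not buried among hundreds of identical noisy ones.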
H
-
HORIBA MIRA
- Challenge number 2625
What is the best way to visualise cybersecurity for a layperson?
Autonomous vehicles are going to need to interact with the general public. The security of these vehicles is paramount; however, the public needs to understand how and why (from a security perspective) a vehicle is acting in a particular way, and to be able to instruct intervention if necessary. What is the best visual way to get this across?
S
-
Scottish Business Resilience Centre
- Challenge number 2715
In this age of increasing interconnectivity, AI, robotics and autonomous vehicles, many communities feel vulnerable in the virtual space. How can multi-sectoral organisations and agencies improve this position?
T
-
TechForce Ltd
- Challenge number 2640
How will AI "deep fakes" cause problems for authentication methods that rely on biometrics like voice or face recognition, and how can this be prevented?
Services like telephone banking and insurance claims are already using some of the latest advancements in voice technology to identify customers; meanwhile, voice-activated assistants are building up large libraries of recordings. An engineering company was recently scammed with a deep fake of its CEO's voice. What is the likely impact of AI used like this for social engineering and fraud, and what mitigations are possible?