This table illustrates the CRs we manually labeled for evaluation. In total, we manually labeled 25 SR-CRs and 25 non-SR-CRs. For simplicity, we omit the non-SR-CRs that our CREEK correctly predicted.
Here, we list the indicators that helped us manually label the 25 SR-CRs, as a supplement demonstrating the diversity of SR-CRs. Note that using these indicators alone to identify SR-CRs would bring in plenty of false positives, as they merely provide a clue that a CR is potentially security-relevant.
id-1: expert's knowledge;
id-2: expert's knowledge;
id-3: expert's knowledge;
id-4: ``... ciphering may fail ...'';
id-5: ``... at risk of eavesdropping ...'';
id-6: ``... security threat ...'';
id-7: ``... session keys ... compromised ... if ... with no encryption ...'';
id-8: ``... attacker...'';
id-9: ``... authentication ... will fail ...'';
id-10: ``... sensible information may be revealed ...'';
id-11: ``... skipping of authentication for malicious UE ...'';
id-12: ``... undermine security guarantees ...'';
id-13: ``... integrity protection ... encryption algorithm ...'';
id-14: ``... leakage of access token ...'';
id-15: ``... DDOS attack ...'';
id-16: ``... fatal security problem remains ...'';
id-17: expert's knowledge;
id-18: ``... security key ...'';
id-19: ``... malicious ...'';
id-20: ``... rogue UE ...'';
id-21: expert's knowledge;
id-22: ``... violation ... user privacy ...'';
id-23: ``... security aspects are not aligned ...'';
id-24: expert's knowledge;
id-25: expert's knowledge;
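To illustrate why these indicators alone are insufficient, the following is a minimal sketch of a hypothetical keyword-based screen over an illustrative subset of the phrases listed above. The function name and phrase list are our own assumptions for illustration; a match only flags a CR as *potentially* security-relevant and still requires manual review (or expert knowledge, as in ids 1-3, 17, 21, 24, and 25 above).

```python
# Hypothetical keyword screen for potentially security-relevant CRs.
# A match is only a clue, not a label: phrases such as "security key"
# also appear in purely editorial CRs, producing false positives.

# Illustrative subset of the indicator phrases listed above.
INDICATORS = [
    "ciphering may fail",
    "eavesdropping",
    "security threat",
    "attacker",
    "ddos attack",
    "security key",
    "malicious",
    "rogue ue",
    "user privacy",
]

def flag_potential_sr_cr(cr_text: str) -> bool:
    """Return True if the CR text contains any indicator phrase."""
    text = cr_text.lower()
    return any(phrase in text for phrase in INDICATORS)
```

For example, a CR stating "... at risk of eavesdropping ..." is flagged, but so is an editorial CR that merely mentions the security key IE, which is exactly the kind of false positive that motivates manual labeling.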