Shared Task Details

Shared Task Objectives


The shared task aims to serve as a collaborative platform where participants propose guidelines and diverse methods for annotating and analyzing the dataset. There will be two subtasks of focus:

Subtask 1: Bias Annotation
Subtask 2: Propaganda Annotation


Subtask 1 (Bias Annotation) has 7 possible labels:

Unbiased
Biased against Palestine
Biased against Israel
Biased against both Palestine and Israel
Biased against others
Unclear
Not Applicable


 

Bias Annotation Examples to illustrate the 7 Labels.


 

Example: "In the ongoing Israel-Palestine conflict, recent events have escalated tensions. Yesterday, Israeli forces conducted operations in response to rocket attacks from Gaza. Both sides have reported casualties. International leaders are calling for restraint and a return to peace talks."

 

 

Example: "Once again, Palestinian aggression has disrupted peace in the region. Palestinian extremists, ignoring efforts for peace, launched unprovoked attacks on innocent Israeli civilians. Israel's response, though portrayed as harsh by some, is a justified measure to protect its citizens."

 

 

Example: "In a typical display of excessive force, Israeli troops have yet again targeted Palestinian areas, causing numerous civilian casualties. This aggression, under the guise of self-defense, highlights the ongoing oppressive tactics Israel employs against Palestinians."

 

 

Example: "In the latest chapter of their endless and futile conflict, Israeli and Palestinian forces have once again engaged in senseless violence. Both sides continue to commit atrocities, showing a complete disregard for peace or human life."

 

 

Example: "In the shadow of the Israel-Palestine conflict, external actors, particularly Iran, are exacerbating tensions. Iran's covert support for extremist groups shows its intent to destabilize the region, disregarding the catastrophic impact on both Israeli and Palestinian civilians."

 

 

Example: "Recent developments in the Middle East have seen an increase in hostilities. The situation in the region is complex, with various factors contributing to the current state of affairs. The international community remains divided on the issue."

 

 

Example: "In other news, the annual technology conference in Tel Aviv has unveiled groundbreaking advancements in cybersecurity. Industry leaders from around the globe gathered to showcase innovations that promise to shape the future of digital security."




Subtask 2 (Propaganda Annotation) has 4 possible labels:

Propaganda
Not Propaganda
Unclear
Not Applicable



Propaganda Annotation Examples to illustrate the 4 possible categories.


Example: "In a display of unmatched heroism, our troops have once again safeguarded our nation from the brink of destruction, heroically neutralizing the threat from Gaza, which aims to undermine our very existence."

Example: "Yesterday, an escalation occurred along the Israel-Gaza border, resulting in casualties on both sides. Israeli and Palestinian officials provided conflicting accounts of the events that led to the confrontation."

Example: "The situation in Gaza remains tense, with reports of civilian distress and military movements. While some sources claim the military actions are defensive, others argue they are provocative, leaving the true nature of the situation open to interpretation."

Example: "A feature on Gaza's cultural scene highlights the resilience of its art community, showcasing how local artists use their craft to express hope and endurance amid challenging circumstances, without delving into the political context."



For each subtask, there will be three evaluation tracks for which winners will be crowned.


Guidelines Track: Teams are free to design their own guidelines and apply them to the shared data. The organizers will evaluate the following items.


Teams must provide well-documented annotation guidelines, including examples, and must report inter-annotator agreement (IAA) figures for at least 200 posts (40 from each language) from Batch 1 and Batch 2. We expect the IAA to be competitive (e.g., Cohen's kappa of 0.6+) in the target label space. The best guidelines will be selected by the organizers. Furthermore, submitting guidelines and IAA figures is a necessary condition for participating in the Quantity and Consistency Tracks (see below).
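As a reference point for the 0.6+ target, the sketch below shows how Cohen's kappa is computed between two annotators who labeled the same posts. The label strings in the toy example are illustrative only, not the prescribed label set:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[lbl] * freq_b.get(lbl, 0) for lbl in freq_a) / (n * n)
    # Undefined when p_e == 1 (both annotators use a single identical label).
    return (p_o - p_e) / (1 - p_e)

# Toy example with hypothetical labels:
a = ["Unbiased", "Biased", "Unbiased", "Unclear"]
b = ["Unbiased", "Biased", "Unclear", "Unclear"]
print(round(cohens_kappa(a, b), 3))  # 0.636
```

Kappa measures only pairwise agreement; teams with more than two annotators would need a generalization such as Fleiss' kappa or Krippendorff's alpha.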


Quantity Track: Teams compete on the number of annotated data batches. Batches must be annotated in order, completing one before moving to the next. The teams with the highest number of completed batches will be crowned Quantity Track winners for their subtask of choice. Please note that a correlation measure will also be computed for the Quantity Track and used during evaluation.


Consistency Track: Teams working on the same subtask will be compared for correlation against each other over the completed batches they share. The team with the highest correlation against the other teams (i.e., the most centroidal choices) will be crowned the winner. This track requires a minimum of three teams per subtask.
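The centroid idea can be sketched as follows, with simple per-item percent agreement standing in for the organizers' correlation measure (which is not specified here); the team names and labels are invented for illustration:

```python
from itertools import combinations

def pairwise_agreement(team_labels):
    """Average agreement of each team with every other team.

    team_labels: dict of team name -> label list over the same shared batch.
    The most 'centroidal' team is the one with the highest average score.
    """
    scores = {team: [] for team in team_labels}
    for t1, t2 in combinations(team_labels, 2):
        l1, l2 = team_labels[t1], team_labels[t2]
        # Fraction of shared items on which the two teams chose the same label.
        agree = sum(a == b for a, b in zip(l1, l2)) / len(l1)
        scores[t1].append(agree)
        scores[t2].append(agree)
    return {team: sum(vals) / len(vals) for team, vals in scores.items()}

# Three hypothetical teams over a shared three-post batch:
teams = {
    "team_a": ["Unbiased", "Unclear", "Unbiased"],
    "team_b": ["Unbiased", "Unclear", "Unclear"],
    "team_c": ["Unbiased", "Unbiased", "Unclear"],
}
scores = pairwise_agreement(teams)
print(max(scores, key=scores.get))  # team_b is most centroidal here
```

With only pairwise comparisons, this requires the minimum of three teams mentioned above for a meaningful centroid; an actual evaluation might substitute a chance-corrected or rank-correlation measure for raw agreement.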


Publication: All teams participating in the Shared Task are invited to submit a short paper (4 pages) describing their efforts. These papers will be evaluated by multiple reviewers for publication in the Arabic NLP 2024 Conference Proceedings, indexed in the ACL Anthology.


Collaborative Commitment: Participants are encouraged to join the shared task with a commitment to collaboration. Whether working independently or within teams, every effort and insight contributed should be shared openly. This collaborative ethos extends beyond individual tasks and includes sharing methodologies, findings, and results.


Optional Demographic Details: We would like to invite participants to provide some demographic details voluntarily. This information includes aspects such as age range, native language, educational background, area of study or expertise, gender, and region of origin. Please note that providing this demographic information is entirely optional and will not influence the evaluation of your participation in any way. We respect your privacy and understand if you choose not to share these details.