====== BPMN Model Validation Survey ======

**Wikipedia Dispute Resolution — Capstone Team Expert Review**

This survey is part of the [[wikipedia_dispute_resolution|Wikipedia Dispute Resolution]] research project. The capstone team has produced a series of [[wikipedia_dispute_resolution#detailed_process_models|BPMN process models]] of Wikipedia's dispute resolution system, generated programmatically from case metadata (parties, votes, dates, principles, findings, and remedies). We need your expert feedback to validate these models against how the process actually works in practice.

Please review the BPMN diagrams on the [[wikipedia_dispute_resolution|main project page]] before answering, and respond to all questions. Your answers are saved anonymously and can only be downloaded by administrators as a CSV.

> **How to use this survey:** Open the [[wikipedia_dispute_resolution|Wikipedia Dispute Resolution page]] in a separate tab so you can refer to the BPMN diagrams while answering. The overview diagram shows the full escalation flow; the detailed models for each stage are linked under the "Detailed Process Models" section.
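To make "case metadata" concrete before you begin (Q5 asks you to judge it), the extracted record for a single case takes roughly the following shape. This is an illustrative sketch: the field names and values are hypothetical, and only the dimensions themselves (parties, votes, dates, principles, findings, remedies) come from the project description above.

<code python>
# Illustrative sketch only: field names and values are hypothetical.
# Only the dimensions themselves (parties, votes, dates, principles,
# findings, remedies) are taken from the project description.
case = {
    "case_name": "Example-Case",           # hypothetical case identifier
    "parties": ["EditorA", "EditorB"],     # named parties to the case
    "dates": {"opened": "2009-01-15",      # key dates in the case lifecycle
              "closed": "2009-03-02"},
    "votes": {"accept": 7, "decline": 2},  # e.g. arbitrator acceptance votes
    "principles": ["Civility"],            # principles cited in the decision
    "findings": ["EditorA edit-warred"],   # findings of fact
    "remedies": ["EditorA topic-banned"],  # remedies imposed
}
</code>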
----

{{:arb_aggregate_workflow.png?direct&400 |}}

q1_completeness:
  q: "Q1 — Overall Completeness: Does the overall diagram capture the full process from start to finish?"
  t: single
  a:
    - "Yes, it appears complete"
    - "Mostly complete, with minor gaps"
    - "Partially complete — significant steps are missing"
    - "No, it is substantially incomplete"

q1a_missing_steps:
  q: "Q1a — Are there any steps that happen in practice but are missing from this diagram? (Leave blank if none)"
  t: text

q1b_missing_decision_points:
  q: "Q1b — Are there any decision points where the process branches that are not shown here? (Leave blank if none)"
  t: text

q2_flow_order:
  q: "Q2 — Flow Order: Does the left-to-right order of steps match how cases actually progress?"
  t: single
  a:
    - "Yes, the sequence is accurate"
    - "Mostly accurate, with minor ordering issues"
    - "Some steps are in the wrong order"
    - "The overall ordering does not reflect reality"

q2a_concurrent_steps:
  q: "Q2a — Are there steps shown in sequence that can actually happen concurrently or in either order? If so, which ones?"
  t: text

q2b_loops:
  q: "Q2b — Are there points where a case can loop back to an earlier step (e.g., a decision gets revisited, a phase reopens) that the diagram does not show? If so, where?"
  t: text

q3_gateways_outcomes:
  q: "Q3 — Decision Gateways and End Events: Do the branching conditions and outcome labels on the gateways (diamonds) and end events (circles) match reality?"
  t: single
  a:
    - "Yes, they accurately reflect real conditions and outcomes"
    - "Mostly accurate, with minor labelling issues"
    - "Some labels or conditions are inaccurate or missing"
    - "The gateways and outcomes are significantly wrong"

q3a_outcome_buckets:
  q: "Q3a — Are the outcome categories (e.g., Remedies Imposed, Declined, Resolved, No Consensus) the right buckets, or would you group them differently? Please describe any changes you would make."
  t: text

q3b_missing_outcomes:
  q: "Q3b — Are there outcomes that actually occur but are not represented as an end state in the diagram? If so, what are they?"
  t: text

q3c_gateway_labels:
  q: "Q3c — For each gateway, are the labels on the outgoing arrows (e.g., Accepted / Declined) the right conditions, or are there other paths a case can take from that point? Please note any incorrect or missing arrow labels."
  t: text

q4_swimlanes:
  q: "Q4 — Swimlane Attribution: Is each task placed in the correct lane (attributed to the right actor)?"
  t: single
  a:
    - "Yes, all tasks are correctly attributed"
    - "Mostly correct, with a few misattributions"
    - "Several tasks are in the wrong lane"
    - "The lane structure does not reflect how roles work in practice"

q4a_wrong_lane:
  q: "Q4a — Are there specific tasks placed in the wrong lane? If so, which tasks should be moved, and to which actor?"
  t: text

q4b_missing_lanes:
  q: "Q4b — Are there roles or actors that do not have their own lane but should (e.g., bots, specific committee roles, uninvolved editors)? If so, which roles are missing?"
  t: text

q5_metadata:
  q: "Q5 — Data Extraction Quality: The model was generated from programmatically extracted metadata (parties, votes, dates, principles, findings, remedies). Overall, how well does this extracted data represent the actual process?"
  t: single
  a:
    - "Well — the extracted data gives a reliable picture"
    - "Partially — some important dimensions are missing or distorted"
    - "Poorly — the extracted data is substantially misleading"
    - "Unable to assess"

q5a_missing_data:
  q: "Q5a — What key data points are we failing to capture that would be important for a complete picture of the process?"
  t: text

q5b_misleading_data:
  q: "Q5b — Are any of the data points we did extract misleading or unreliable as parsed (e.g., vote counts that do not mean what we think they mean)? If so, please describe."
  t: text

q_role:
  q: "About you — What best describes your relationship to Wikipedia's dispute resolution processes? (Optional — helps us interpret your feedback)"
  t: single
  a:
    - "Active Wikipedia editor with DR experience"
    - "ArbCom member (current or former)"
    - "DRN or Mediation Committee volunteer"
    - "Administrator with DR involvement"
    - "Researcher / outside observer"
    - "Prefer not to say"

q_other:
  q: "Any other comments or observations about the BPMN models not covered above? (Optional)"
  t: text

----

===== About This Survey =====

This survey is part of the [[wikipedia_dispute_resolution|Wikipedia Dispute Resolution]] capstone project. The BPMN models it references were generated programmatically and are documented on the main project page alongside process component descriptions, key actors, and policy cross-references.

**Useful links for reviewers:**

  * [[wikipedia_dispute_resolution|Main project page — overview diagram and detailed models]]
  * [[wikipedia_dispute_resolution#detailed_process_models|Detailed process models by stage]]
  * [[wikipedia_dispute_resolution#actors|Actor descriptions]]
  * [[wikipedia_dispute_resolution#key_policies|Key policies referenced in the models]]

Results will be exported as CSV for analysis by the capstone team. Thank you for your time.
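For the capstone team's analysis step, a minimal sketch of reading the exported responses, assuming one row per respondent and one column per question id defined above. The filename and column names are assumptions; the actual layout depends on the survey plugin's CSV export format.

<code python>
import csv
from collections import Counter

# Minimal sketch: assumes one row per respondent and one column per
# question id (q1_completeness, q1a_missing_steps, ...). The real
# column layout depends on the survey plugin's CSV export format.
with open("bpmn_survey_export.csv", newline="", encoding="utf-8") as f:
    responses = list(csv.DictReader(f))

# Tally the single-choice completeness ratings (Q1).
print(Counter(r["q1_completeness"] for r in responses))
</code>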