

PROFESSIONAL HEAD OF DEFENCE INTELLIGENCE ANALYSIS
QUICK WINS 
FOR BUSY ANALYSTS


FOREWORD
This handbook has been prepared by my Futures and Analytical Methods
Team as a guide to help busy analysts tackle intelligence questions using
simple analytical approaches or techniques.
All the approaches inside can be used quickly by the individual analyst or
with a small group of colleagues. Analysts often lack the time and resources
to carry out in full some of the more demanding analytical techniques taught
on DI training courses, but still need a structured approach to producing
intelligence assessments. This guide seeks to address the gap. It does not
pretend to be exhaustive, but I view it as an important addition to the 
analytical toolbox. The guidance provided is based on DI FAM’s research
and its practical experience in helping DI analysts tackle a range of intelligence questions using structured analytical approaches and techniques.
For further details regarding the analytical approaches in this handbook or
additional methodologies, please contact the DI FAM team. Contact details
are provided at the end of this guide.
Paul Rimmer
Deputy Chief of 
Defence Intelligence

Start here
REFINING THE QUESTION
The role of Defence Intelligence is to reduce decision makers’ uncertainty so that the 
optimal courses of action are more likely to be taken, or policies adopted. Understanding
the customer’s information requirement, and helping them frame their questions, are some
of the most important characteristics of a good analyst. If the question, as understood by
the analyst, does not reflect the genuine information requirements of the customer then
analytical resources might be wasted and the customer is likely to be disappointed.

Use the checklist below to guide your discussions with customers when framing
a question prior to undertaking analysis.

Go to the next section of this guide once you are clear about the question.
STAGE 1 - CLARIFICATION
• Is it clear what would constitute an answer to the question?
• Is it obvious what you would need to know to provide an answer (even if finding it out would be difficult)?
If YES then go to stage 2.
If NO then clarify the precise definitions of the terms in the question with the customer.

STAGE 2 - WIDENING
Based on your engagement with the customer, why exactly are they interested?
• Are they really interested in something else, which they have assumed your question will answer?
• If the question they have asked is closed (a 'yes / no' question):
  ˚ Would they probably be disappointed with an answer of 'yes' or 'no'?
  ˚ Would you want to answer it 'yes, but...' or 'no, but...'?
• Are there any hidden assumptions lying behind the question?
If NO to all then go to stage 3.
If YES to any then reframe the question with the customer so it covers the real object of interest, or is an appropriately open question instead of a closed one.

STAGE 3 - FOCUSING
• Does the customer's decision depend on whether the answer meets some threshold or set of thresholds, rather than the precise answer you give?
If YES, reframe the question with the customer so it covers the narrower question of whether or not the threshold has been (or will be) met.
If NO, turn over.

Now turn over
1

Next
CLASSIFYING THE QUESTION
Assuming you have clarified the question, and confirmed that it is indeed asking for an
assessment that directly addresses issues on which the customer’s decision depends, the
next stage is to consider broadly what the fundamental characteristics of the question are.
• Closed questions can in theory be answered ‘yes’ or ‘no’ - they ask you whether
something is or will be the case.
• Open questions cannot be answered ‘yes’ or ‘no’, and typically start with ‘What’,
‘Who’, ‘Why’ etc.
• Present-focused questions are about things that could in theory be observed now.
• Future-focused questions are about things that have not yet happened, but might do.
• Closed questions about the present: 'Yes'/'No' questions about what is happening now.
• Closed questions about the future: 'Yes'/'No' questions about what could or will happen.
• Open questions about the present: 'What', 'Which', 'When', 'Who', 'Where', 'How' or 'Why' questions about what is happening now.
• Open questions about the future: 'What', 'Which', 'When', 'Who', 'Where', 'How' or 'Why' questions about what could or will happen.
Quantitative Questions
Questions asking ‘how many’, ‘when’, ‘how much’, and so on might require an approach
designed to tackle quantitative questions. These approaches are outside the scope of this
booklet, but advice and guidance can be obtained from the FAM team - contact details are
at the back of this book.
2








Finally
SELECTING THE TECHNIQUE
OR APPROACH
Use the diagram below to navigate your way to the appropriate section of this guide. Each
section contains a short description of one category of technique, what it is useful for, and
also some information on a few simple techniques and approaches - and how to use them -
within that category.
• Open Present - 'What', 'Which', 'When', 'Who', 'Where', 'How' or 'Why' questions about what is happening now: use Hypothesis Generation Techniques (page 9).
• Closed Present - 'Yes'/'No' questions about what is happening now: use Hypothesis Testing Techniques (page 15).
• Closed Future - 'Yes'/'No' questions about what could or will happen: use Scenario Evaluation Techniques (page 33).
• Open Future - 'What', 'Which', 'When', 'Who', 'Where', 'How' or 'Why' questions about what could or will happen: use Scenario Generation Techniques (page 23).
• Any question category: Data Organisation approaches (page 37) can also be used, depending on the question.
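For analysts who like to capture this routing electronically, the short sketch below (Python, illustrative only) encodes the mapping above. The category names and page numbers are taken from this guide; the function and variable names are assumptions for illustration.

    # Illustrative sketch: map a classified question to the technique
    # category and page suggested by the diagram above.

    TECHNIQUE_MAP = {
        ("open", "present"): ("Hypothesis Generation Techniques", 9),
        ("closed", "present"): ("Hypothesis Testing Techniques", 15),
        ("open", "future"): ("Scenario Generation Techniques", 23),
        ("closed", "future"): ("Scenario Evaluation Techniques", 33),
    }

    def select_technique(question_form: str, time_focus: str) -> str:
        """Return the suggested technique category and page for a question.

        question_form: "open" or "closed"; time_focus: "present" or "future".
        Data Organisation (page 37) applies to all questions, depending on the question.
        """
        category, page = TECHNIQUE_MAP[(question_form.lower(), time_focus.lower())]
        return f"{category} (page {page}); see also Data Organisation (page 37)"

    print(select_technique("open", "future"))
    # Scenario Generation Techniques (page 23); see also Data Organisation (page 37)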
3

Still need help ?
WHEN SHOULD I COME AND SEE FAM?
[Flowchart, summarised. Starting points include: 'I'm struggling to understand a technique', 'Have I applied a technique correctly?', 'I don't know how to tackle my question', 'I want to run an analytic event' and 'I have applied a novel technique'. Decision points include: 'Have you refined the question?' (if not, engage with your customer), 'Have you selected a technique?' (if not, check out Quick Wins), 'Does the technique call for creative or critical thinking?' (check out the Guidance Notes on Creative and Critical Thinking Techniques), 'Do you need help applying the technique?' (FAM support can be Consultative, Collaborative or Comprehensive) and 'Do you want to host an event?' (FAM can help you organise your event; ideally 2 weeks minimum notice). If you are still struggling, come and see FAM.]
4

This guidance aims to provide analysts with a clear framework for:
• aligning an intelligence question as closely as possible with the customer's information requirement;
• understanding the question you are being asked;
• choosing the right approach to help you answer your question;
• communicating uncertainty effectively.
Quick Wins aims to demonstrate that structured analysis does not have to be time- or resource-intensive. Simple, pared-down techniques and approaches to tackle clearly defined intelligence questions can aid quick thinking and provide a framework for rapidly producing clear and robust assessments. They also provide structures for presenting assessments effectively to customers and seniors.
Structured analysis can benefit analysts in a variety of other ways, including:
• providing an intellectual audit trail allowing analysts and others to understand how an assessment was reached and, if necessary, in future, review it easily;
• making analysis more persuasive to customers and stakeholders;
• encouraging creativity;
• identifying and questioning assumptions;
• identifying discriminating evidence;
• identifying denial and deception;
• managing complexity; and
• avoiding cognitive biases.
The main sections contained in this modular guide are outlined below. Use the schema on pages 1 – 3 to help you clarify and classify your question and select a suitable technique category. Each technique listed in this book has four main headings:
• What is it?
• Other techniques and approaches to use with it;
• Level of effort required; and
• How to do it.
Page 55 provides guidance on communicating uncertainty effectively in intelligence assessments (e.g. by using the 'Uncertainty Yardstick').
Page 61 gives some guidance on using analytical techniques to capture expert judgement in a
variety of ways.
Finally, at the back of this book you will find details of training courses and contact details
for the Defence Intelligence Futures and Analytical Methods Team (DI FAM).
5

This page is intentionally blank
6

COMMON BIASES
• Confirmation bias: the tendency only to consider hypotheses that you already believe are true. Mitigate using Brainstorming (page 12), Environmental Scanning (page 10) or Analysis of Competing Hypotheses (page 16).
• Anchoring effect: the tendency to place undue weight on the first piece of information you come across. Mitigate using a Key Assumptions Check (page 20) or Analysis of Competing Hypotheses (page 16).
• Groupthink: the tendency for analysts to conform to the views and positions of the group to which they belong. Mitigate using a Key Assumptions Check (page 20) or 'Breaking the Mirror' (page 24).
• Hindsight bias: the tendency to underestimate how surprising past events were, which makes future shocks seem less plausible than they actually are. Mitigate using the Cone of Plausibility (page 28).
• Failure of imagination: the unconscious tendency to dismiss or ignore unlikely-sounding scenarios without considering them in detail. Mitigate using any scenario- or hypothesis-generation technique (pages 23 and 9).
• Availability and recency effects: the tendency to focus on scenarios which are particularly salient or similar to recent events rather than examining alternatives. Mitigate using any scenario-generation technique (page 23).
• Mirror-imaging: the tendency to underestimate the differences in the beliefs and objectives of foreign protagonists, which tends to lead to the belief that others will act in much the same way that we would. Mitigate using 'Breaking the Mirror' (page 24).
7

CONTENTS
PAGE
1. Refining the Question ..................................................................1
2. Classifying the Question ..............................................................2
3. Selecting the Technique ...............................................................3
4. Hypothesis Generation ................................................................9
   Environmental Scanning; Structured Brainstorming
5. Hypothesis Testing ......................................................................15
   Quick Analysis of Competing Hypotheses; Key Assumptions Check
6. Scenario Generation ..................................................................23
   'Breaking the Mirror' (or Red Teaming at the desk); Cone of Plausibility/Driver Identification
7. Scenario Evaluation ...................................................................33
   Backcasting-Light/Quick I&W; Key Assumptions Check
8. Data Organisation ......................................................................37
   Environmental scanning categories; SWOT analysis; Mind maps; Chronologies and Timelines; Matrices; Voting and Ranking; Filtering
9. Communicating Uncertainty ....................................................55
   The problem of conveying uncertainty; Conveying uncertainty effectively: The Uncertainty Yardstick; Conveying uncertainty effectively: Other issues
10. Capturing Expert Judgement ....................................................61
    Analytical Events; Presentation-discussion; Offline
11. Contact Details ..................................................Inside back page
8

HYPOTHESIS GENERATION
“There should be no combination of events for which the wit of man cannot conceive an explanation”
Sherlock Holmes (Arthur Conan Doyle)
Hypothesis generation is applicable to open, present-focused questions, such as:
What are the motivations behind RED’s nuclear programme?
Why has RED stopped providing assistance to insurgent groups based in GREEN?
These questions ask you to generate a range of potential facts about the world – or
hypotheses – that might have a bearing on the subject of interest.
Because there are always more possible hypotheses than data, there are no approaches that
can guarantee that you will identify the ‘true’ hypothesis among those you generate.
Nevertheless, structured hypothesis generation is designed to stimulate your imagination to
make it more likely that the right hypothesis will be among your candidates. It encourages
you to look at a wider range of possibilities, and, in a group, to prompt one another's thinking with new ideas.
You don't have to generate hypotheses in a structured way, but if you don't you run the risk of falling into various cognitive traps, including:
• confirmation bias (the tendency only to consider hypotheses that you already believe are true); and
• the availability heuristic (the tendency to think of ideas that are particularly salient, rather than likely).
9



HYPOTHESIS GENERATION
ENVIRONMENTAL SCANNING
10 mins
What is it? 
Environmental scanning is the use of simple mnemonics to widen the scope of your thinking
and generate a range of explanations (hypotheses) for developments. It is a way of forcing
yourself to think beyond explanations which seem intuitively obvious or which were the
first, or most salient, ideas that sprang to mind.
Environmental scanning is almost universally useful in helping you generate ideas for a variety of purposes. STEMPLES is particularly useful for examining defence-related questions and stands for Social, Technological, Environmental, Military, Political, Legal, Economic and Security. Other variations, however, may be more appropriate to the intelligence question at hand. These include PESTLES, PEST or STEEP. In some cases, you may need to generate your own bespoke set of categories to help you generate ideas relevant to the question.
Other techniques and approaches to use with it:
• Mind mapping to prompt new ideas and link existing ones (see page 42).
• Environmental scanning is useful for widening the scope of your thinking and generating ideas, drivers, key assumptions etc during:
  - Structured Brainstorming sessions (see page 12);
  - Cone of Plausibility/Drivers Identification exercises (see page 28);
  - Backcasting-Light (see page 34);
  - I&W exercises;
  - SWOT Analysis (see page 40); and
  - Breaking the Mirror/Red Teaming exercises (see page 24).
Level of effort required
Using environmental scanning to generate hypotheses is an analyst's at-desk 'quick and dirty'
alternative to a structured brainstorming and can take as little as 10 minutes. If possible,
however, hold a structured brainstorming with 6 – 8 participants (see ‘Event planning rules
of thumb’ on page 63 for more information on group size) to generate hypotheses as the
range of ideas is likely to be greater.
How to do it
1. Select a suitable mnemonic from above or generate a bespoke set of categories for your question. Use your categories or headings to generate as many ideas as you can and write them down. The table directly below provides some possible interpretations of/prompts for the categories that you find relevant to the question you are tackling.
10

HYPOTHESIS GENERATION
ENVIRONMENTAL SCANNING (cont.)
CATEGORY: POSSIBLE INTERPRETATIONS
Social: Culture, attitudes/perceptions, education, population, health, welfare, corruption etc.
Technological: Developments, funding, access to technology, patents, licensing, IT, mobile phones, infrastructure etc.
Environmental: Climate, impact of weather, natural disasters, natural resources, geography etc.
Military: Capabilities, developments, doctrine, command and control, leadership, loyalty to government etc.
Political: Leadership, political system, policies, pressure groups, elections, relations with other states etc.
Legal: Current and future legislation, regulatory processes, judicial system, international organisations (membership of), treaties (signatories to) etc.
Economic: Internal economy, trade, industry, agriculture, economic blocs (membership of), external investment, aid, unemployment, interest rates, global markets etc.
Security: Police and paramilitary forces, coast guard, terrorism, insurgent groups, criminal networks, private companies, reforms etc.
2. Once you have generated a broad range of hypotheses, review them to see whether any are so similar that they can be combined and to check whether you have missed any.
3. Make a record of them and consider them periodically to keep an open mind about them, rather than fixing on a hypothesis or hypotheses prematurely. Alternatively, you may need to evaluate the hypotheses more formally and systematically against the information available using a hypothesis testing approach, such as ACH. Bear in mind that in some cases (e.g. who assassinated the president of RED?), you may be seeking just one explanation, whereas in others there may be a range of interconnected (rather than mutually exclusive) hypotheses.
For example, you might use STEMPLES to tackle the question about RED's motivations for a nuclear weapons programme. See the example directly below.
CATEGORY: HYPOTHESIS
Social: None.
Technological: RED's technological base is advanced so nuclear weapons development can be done easily and rapidly.
Environmental: RED has large quantities of the right kind of raw materials.
Military: RED's armed forces have demanded a nuclear capability to compensate for conventional military weakness.
Political: RED's leadership sees nuclear weapons as conveying prestige. RED's leadership sees possession of nuclear weapons as a bargaining chip with the international community.
Legal: RED is not party to treaties to deter it from developing nuclear weapons.
Economic: RED's leadership has opted for nuclear weapons as they are cheaper than buying large amounts of conventional weaponry.
Security: RED's leadership is concerned about its neighbours, GREEN and BLUE, which have nuclear weapons and are hostile.
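If you prefer to capture a scan electronically, a minimal sketch of this sort of category-by-category prompt is shown below (Python, illustrative only). The categories come from STEMPLES as defined above; the function names are assumptions, and the stub answers are abridged from the RED example.

    # Minimal sketch: use the STEMPLES categories as prompts and record one or
    # more hypotheses (or "None") against each, as in the RED example above.

    STEMPLES = [
        "Social", "Technological", "Environmental", "Military",
        "Political", "Legal", "Economic", "Security",
    ]

    def scan(record_hypotheses):
        """Walk through each category and collect the hypotheses offered for it.

        record_hypotheses is any callable taking a category name and returning a
        list of hypothesis strings (it could simply prompt the analyst).
        """
        results = {}
        for category in STEMPLES:
            results[category] = record_hypotheses(category) or ["None"]
        return results

    # Example: a stub that only answers for two categories.
    example = scan(lambda c: {
        "Military": ["RED's armed forces have demanded a nuclear capability "
                     "to compensate for conventional military weakness."],
        "Security": ["RED's leadership is concerned about its nuclear-armed, "
                     "hostile neighbours GREEN and BLUE."],
    }.get(c, []))

    for category, hypotheses in example.items():
        print(category, "-", "; ".join(hypotheses))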
11



HYPOTHESIS GENERATION
STRUCTURED BRAINSTORMING
<60 mins
What is it? 
Brainstorming is a widely used group creativity technique designed to generate a large number
of ideas and concepts to help solve a problem or tackle a challenge. It is a useful way of 
bringing analysts together to generate explanations or hypotheses to explain events. It is also
almost universally useful in helping them generate ideas for a variety of purposes in analytical
events (see below). It involves two stages: a divergent thinking stage to generate new ideas;
and a convergent thinking stage to organise them and, if appropriate, reduce them.
Other techniques and approaches to use with it:
• A suitable environmental scanning approach, such as STEMPLES, to widen the scope of your thinking (see page 10).
• Mind mapping to prompt new ideas and link existing ones (see page 42).
• Structured brainstorming is useful for generating:
  - key drivers in Cone of Plausibility/Drivers Identification exercises (see page 28);
  - key assumptions in Backcasting-Light exercises (see page 34);
  - indicators for I&W exercises.
• Voting and ranking to reduce the number of ideas for further explanation (see page 50).
Level of effort required
Assuming the question to be brainstormed has been clarified in advance, a group of 6 – 8 
analysts (see ‘Event planning rules of thumb’ on page 63 for more information on group size)
can complete a very simple brainstorming in under an hour, though additional activities at the
convergent stage (such as voting and ranking) will mean your brainstorming could last longer.
How to do it
1. A generic brainstorming workshop process is set out below. However, for generic guidance on planning and running analytical events see 'Analytical Events' on page 63. The facilitator should set at least one specific ground rule - 'no deriding of ideas' - at the start of the brainstorming and ensure adherence to it. This is to help prevent the stifling of creativity and avoid groupthink, anchoring and the authority fallacy. Essentially, the facilitator should try to ensure that there is no 'official' analytic line.
2
The facilitator should: (i) give people a few minutes to think and write their ideas down 
silently on sticky notes (one idea per sticky note); and (ii) refer them to environmental 
scanning posters to help them think widely.
Asking participants for a maximum number of hypotheses (e.g. three or five) at the silent
brainstorming stage can help avoid facilitators being swamped with numerous ideas which
might contain duplication. Once the initial sticky notes with ideas have been dealt with as
described in the paragraph below the facilitator can ask participants for further ideas.
3. The facilitator (preferably with a helper) then starts collecting the sticky notes, reads them out, and seeks clarification of them (if necessary) before placing them on a whiteboard or wall where all can see them. They inevitably trigger further ideas which the participants should
12


HYPOTHESIS GENERATION
STRUCTURED BRAINSTORMING (cont.)
write on their sticky notes and hand to the facilitator. By this stage it is acceptable for 
participants to call out their ideas as they write them on sticky notes, but there should be 
no criticism or debate - these should be saved for the next stage. Wild ideas are acceptable 
in most brainstorming sessions. They keep things moving, stimulate deeper thinking and can
lead to other useful ideas.
4. When ideas wane, the convergent phase can begin. This involves reducing the ideas to those which will be taken forward. The facilitator works with the participants to group the ideas thematically and eliminate any duplication and ideas irrelevant to the question. Mind mapping can be used during this phase to show the linkages between the different ideas or hypotheses. This may prompt further ideas, which should be allowed and added to those already generated.
[Diagram: sticky notes A - F placed individually on a whiteboard, one idea per post-it note.]
5. Next, further reduction, if appropriate, can take place. There are two main ways of reducing ideas. Participants vote for their favourite ideas using a simple voting system, or they discuss the ideas and the facilitator sees what emerges.
[Diagram: the same sticky notes grouped thematically, e.g. under Social, Political and Military headings.]
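Where the sticky notes are later typed up, the convergent steps (grouping, de-duplication and simple voting) can be mirrored in a few lines. The sketch below is illustrative only; the themes, ideas and vote counts are invented.

    from collections import Counter

    # Illustrative sketch of the convergent phase: ideas are grouped by theme,
    # exact duplicates are dropped, and a simple vote tally picks the favourites.

    ideas_by_theme = {
        "Social":    ["Idea A", "Idea B", "Idea B"],          # duplicate B
        "Political": ["Idea C", "Idea D"],
        "Military":  ["Idea E", "Idea F"],
    }

    # De-duplicate within each theme while preserving order.
    grouped = {theme: list(dict.fromkeys(ideas)) for theme, ideas in ideas_by_theme.items()}

    # Each participant votes for their favourite ideas (invented votes).
    votes = Counter(["Idea A", "Idea C", "Idea C", "Idea E", "Idea A", "Idea C"])

    # Keep the top ideas for further work.
    shortlist = [idea for idea, _ in votes.most_common(2)]

    print(grouped)
    print("Shortlist:", shortlist)   # e.g. ['Idea C', 'Idea A']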
13

‘Boultbee’s Criterion’ says that if the converse of a statement is absurd,
the original statement is an insult to the intelligence and should never
have been said.
Take your key judgements in turn, and ask yourself how you would
KNOW if they were FALSE. If you can’t easily think of a way they could
be disproved, they are probably vacuous.
14

HYPOTHESIS TESTING
“Never assume the obvious is true”
William Safire
Hypothesis testing is appropriate for closed questions about the present, such as:
Is RED’s nuclear programme purely civil?
Is RED providing technical assistance to GREEN’s armed forces?
These kinds of questions ask you to come to a conclusion about the likelihood of particular,
pre-defined hypotheses. Often, open questions about the present will require you to move
on to a ‘hypothesis testing’ phase once you have generated some candidate hypotheses.
Hypothesis testing techniques are designed to help analysts establish what assumptions a
hypothesis rests on, whether it is likely to be true and if so with what probability, or which
hypothesis from a range of possibilities is most consistent with the data.
Although you don't need to use an explicit hypothesis testing technique, not doing so runs the risk of you falling prey to various cognitive traps, including:
• the anchoring effect (the tendency to place undue weight on the first piece of information you come across);
• the availability heuristic (the tendency to think of ideas that are particularly salient, rather than likely);
• overconfidence about the most likely hypothesis; and
• groupthink.
15



HYPOTHESIS TESTING
QUICK ANALYSIS OF COMPETING HYPOTHESES (ACH)
4 hrs
What is it? 
Assuming you have a set of hypotheses already generated, ACH involves identifying a list of potentially-relevant pieces of information and assessing their consistency with those hypotheses. The approach is designed to help analysts consider all the evidence in the light of all the hypotheses as objectively as possible. Without this kind of structure, there is a risk that the hypothesis which 'survives' the first few pieces of evidence will stand unchallenged even when contradictory evidence arises.
A busy analyst can use a pared down version to organise their thinking on a sudden new development of interest (e.g. the assassination of a leader by parties unknown or the interdiction of a component which might have a use in a weapon system) which:
(i) requires the rapid analysis of a small set of hypotheses (e.g. five or fewer) to explain it; and
(ii) involves a limited set of evidence (e.g. 10 - 20 items) to manage. A simple ACH of this kind can help analysts quickly see how or if the available data relates to the hypotheses and thus which of them are more or less likely, and what the intelligence gaps are.
Other techniques and approaches to use with it:
• Environmental scanning (e.g. STEMPLES) to widen the scope of your thinking, to encourage the identification of all relevant pieces of information and to organise them into categories (see page 10).
• It uses a matrix to list the hypotheses and the key evidence (see page 48).
• It involves filtering evidence to assess its usefulness (see page 52).
Level of effort required
Applying ACH to a new development as described above can be done relatively quickly in a
few hours using a simple matrix by a busy analyst at their desk or a group of 6 – 8 analysts
(see ‘Event planning rules of thumb’ on page 63 for more information on group size).
The methodology output provides a useful framework for structuring and producing a
product quickly and a clear audit trail for explaining or justifying an assessment.
Remember that an ACH exercise can be labour-intensive if applied rigorously to a long-standing problem. It would involve gathering and evaluating a large quantity of data against a
range of hypotheses over the course of weeks or months. Such an exercise might also
require the use of a spreadsheet to sort the evidence in various meaningful ways (e.g. by
diagnosticity, relevance, reliability or credibility, source, date etc).
How to do it
1. Create a matrix and write your hypotheses along the top, one hypothesis per column. Down the vertical axis make a list of all the evidence that is relevant to evaluating the hypotheses and thus answering the question posed, listing one distinct piece of evidence per row. This could include known facts, assumptions, arguments and the absence of things you would expect to see if a hypothesis were true. Listing assumptions and arguments as well as hard facts can provide you with useful insights about the issue. For example, if your entries are predominantly assumptions and arguments and not hard facts, the assumptions will need to be examined carefully.
16

HYPOTHESIS TESTING
QUICK ANALYSIS OF COMPETING HYPOTHESES (ACH) (cont.)
[Blank ACH matrix: one column per hypothesis (H1 - H6), a CREDIBILITY column, one row per piece of evidence (ITEM 1, ITEM 2, ... ITEM 7, etc.) and a TOTALS row at the bottom.]
You may well wish to annotate your evidence with report reference information and use a
colour code to rate its reliability/credibility using a simple system, e.g. green for high, amber
for medium and red for low as shown immediately above.
2. Then work down your list of 'evidence', assessing each item against the hypotheses using a standard key (1) as follows:
• If it is highly likely, or almost certain, that you would see this evidence if the hypothesis were true (i.e. more than 75% likely), put '4' in that box.
• If it is likely (between 50% and 75% probable), put '3' in that box.
• If it is a realistic possibility (between 25% and 50% likely), put '2' in that box.
• If it is unlikely (between 10% and 25%), put a '1' in the box.
The printed key also uses 'N/A' and '?' markers, which appear in the example matrix below.
Analysts can also use a hard or big cross to indicate that if the evidence is true, then a hypothesis must be false. Analysts should be wary of denial and deception, misinformation and source reliability when considering the use of a hard cross to refute a hypothesis.
The last row in the matrix is to record the totals for and against each hypothesis.
3. Refine the matrix: this involves examining the utility of the evidence in the matrix and discarding any that either has little or no diagnosticity or is only applicable to one or two hypotheses. If evidence is discarded it should be retained for the audit trail, and it is wise to leave a record of the evidence in your matrix in case it becomes relevant at a later stage. Eventually your matrix may look something like this:
(1) Analysts will find several different keys are used in ACH. Analysts may wish to adopt their own key if it assists their analysis, which is acceptable tradecraft practice provided the key is displayed with the ACH matrix. For example, some analysts prefer to use CC, C, I and II to represent wholly consistent, consistent, inconsistent and wholly inconsistent.
17

HYPOTHESIS TESTING
QUICK ANALYSIS OF COMPETING HYPOTHESES (ACH)
[Worked example matrix: hypotheses H1 - H6 across the top, evidence items 1 - 11 down the side, each item colour-coded for credibility and scored cell-by-cell against the key above. The TOTALS row reads: H1 4/7, H2 5/4, H3 5/3, H4 4/6, H5 6/3, H6 5/4.]
4. When conducting an ACH, there are a few things to bear in mind:
• Be aware that evidence scoring the same across all hypotheses is non-diagnostic and does not tell you anything useful about your hypotheses. This kind of evidence can be put to one side. (This applies to evidence item 4 in the example.)
• Pay most attention to the most diagnostic evidence - i.e. that which is highly consistent with some hypotheses and inconsistent with others. (Using the example above, evidence items 3, 7, 9 and 11 are the most diagnostic and items 2 and 4 the least diagnostic.)
• Double-check how reliable/credible the most diagnostic evidence is - especially if your conclusions hinge upon it. (Using the example above, item 1 is the most reliable/credible and item 5 is the least reliable/credible.)
• Consider how many scores are based on the same underlying assumption and double-check your confidence in those assumptions.
• Consider whether your overall conclusions would change significantly if these pieces of evidence were interpreted differently, were wrong or were deceptive.
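The tallying and diagnosticity checks above are easy to reproduce in a spreadsheet or a short script. The sketch below is a minimal Python illustration using the 4/3/2/1 key from step 2; the hypotheses, evidence items, scores and the treatment of low scores as counting 'against' a hypothesis are all invented for illustration, not taken from the worked example.

    # Minimal sketch of a quick ACH matrix using the 4/3/2/1 key above, with "X"
    # for evidence that would refute a hypothesis and None for not applicable.
    # All names and scores here are invented for illustration.

    hypotheses = ["H1", "H2", "H3"]
    matrix = {
        "Item 1": {"H1": 4, "H2": 1, "H3": "X"},
        "Item 2": {"H1": 3, "H2": 3, "H3": 3},      # same everywhere: non-diagnostic
        "Item 3": {"H1": 2, "H2": 4, "H3": 1},
    }

    # Flag non-diagnostic evidence (identical scores across all hypotheses).
    non_diagnostic = [item for item, row in matrix.items()
                      if len({row[h] for h in hypotheses}) == 1]

    # Tally supportive scores (3 or 4) and items counting against each hypothesis.
    totals = {h: {"consistent": 0, "against": 0} for h in hypotheses}
    for row in matrix.values():
        for h in hypotheses:
            score = row[h]
            if score == "X" or score == 1:
                totals[h]["against"] += 1
            elif isinstance(score, int) and score >= 3:
                totals[h]["consistent"] += 1

    print("Non-diagnostic evidence:", non_diagnostic)   # ['Item 2']
    print(totals)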
18

HYPOTHESIS TESTING
QUICK ANALYSIS OF COMPETING HYPOTHESES (ACH) (cont.)
Bear in mind that the approach is not a silver bullet. Your results may well be inconclusive
as shown in the example (two or more hypotheses being equally well-supported).
The results of an ACH can be used to help you to:
• engage with collectors with regard to the most diagnostic intelligence reports 
(e.g. to clarify content and reliability/credibility and see whether further information
might be available);
• consider what evidence would help to further distinguish between the various hypotheses
and thus steer collection requirements and engagement with intelligence allies; and
• highlight intelligence gaps.
Discuss the relative likelihood of all the hypotheses when reporting your conclusions.
Explain the significance of the diagnostic evidence in distinguishing between the relative
likelihood of the hypotheses.
Tips
Following the steps and structures outlined above allows an analyst to conduct a very
simple exercise at their desk. Doing such an exercise alone, however, may not be as
effective as having a group of analysts analyse the evidence against the hypotheses.
Use a simple table in Word as shown in the example above or an Excel spreadsheet.
For generic guidance on planning and running a simple ACH event, see ‘Analytical Events’ on
page 63. Below are a number of tips that should help you organise a simple group ACH.
For workshops, make sure you have empty matrices with the exam question and the hypotheses printed onto A1 paper or draw them onto whiteboards in advance. They can then be completed during the event. They provide helpful structure, keeping both facilitators and analysts focused and systematic.
19



HYPOTHESIS TESTING
KEY ASSUMPTIONS CHECK (KAC)
10 mins
What is it? 
If a most likely hypothesis has been established, a KAC involves identifying all the underpinning
assumptions behind it, and making judgements as to how: (i) important to the hypothesis 
(i.e. ‘load-bearing’); and (ii) well-supported they are. It allows you to check your analysis (or 
that of others), potentially exposing firmly held assumptions that may have gone unchallenged
over time. This is especially important for intelligence analysts who routinely have to make
assumptions to fill gaps where information is incomplete or ambiguous.
A KAC can be conducted at the start of an analytical project by identifying and testing all your
working assumptions underpinning a current assessment. It is also useful, however, at the draft 
or coordination stage when you are seeking the input of other SMEs. For really important 
assessments consider conducting two KACs - one early on and one at the draft stage.
Other techniques and approaches to use with it:
• Matrices to list and rate the assumptions (see page 48).
• Filtering to identify the most important and least well-supported assumptions (see page 52).
• Structured brainstorming (see page 12).
Level of effort required
Assuming there is only one analytic line of interest, and that it has been identified in advance,
this can be done by a group of 6 – 8 analysts (see ‘Event planning rules of thumb’ on page 63
for more information on group size) in less than two hours. Generating the list of key assumptions in advance of an event will also reduce the workshop length. An analyst at their desk
could almost certainly complete a KAC faster than a group of analysts, but KAC works much
better as a group activity as a range of perspectives will be considered.
How to do it
1. Identify your 'analytic line' to be tested. It might look as follows:
RED is GREEN's most important military supplier. RED has also been pivotal in assisting GREEN with its WMD programmes which are now reaching maturity.
2. List all of the key assumptions that you believe underpin the analytic line, i.e. those that are accepted as being true for the conclusions to be valid. For the example above these would look as follows:
ASSUMPTIONS
1. RED is supplying GREEN militarily
2. GREEN has no other significant military supplier
3. RED has provided GREEN with non-military goods and training
4. GREEN has WMD programmes
5. GREEN depends on RED assistance for its WMD programmes
6. GREEN's WMD programmes are reaching maturity
7. ...etc.
20

HYPOTHESIS TESTING
KEY ASSUMPTIONS CHECK (KAC) (cont)
After you have developed as complete a list as you can, go back and critically examine each assumption using the following questions to aid your thinking:
• If it were false, how seriously would this undermine the analytic line?
• How much confidence do you have that this assumption is valid?
  - Why do you have this degree of confidence?
  - Under what circumstances might this assumption be false?
  - Could it have been true in the past but no longer true today?
  - What would we expect to see if this assumption were true?
  - Why aren't we seeing these indicators?
Based on these, score each assumption according to two criteria:
• RELEVANCE:
  - Largely irrelevant to analytic line (0)
  - Important - analytic line would be significantly less likely if assumption were false (1)
  - Essential - analytic line cannot be true without assumption (2)
• SUPPORT:
  - Unsupported or very questionable (0)
  - Correct with some caveats (1)
  - Solid (2)
3. We are looking for the shaky, load-bearing assumptions. Find the assumptions which score highest for 'relevance'. Of these, the assumptions with the lowest 'support' scores are the key uncertainties. A matrix template like the one below should be used to filter your key assumptions and provide a clear structure for the exercise. Use a comments column to record the rationale behind the results of your confidence check. This might relate to the quality and quantity of evidence, reliability of sources etc.
In this example, the scores suggest that items 4 and 5 require revisiting as they are essential to the analytic line but unsupported. Item 3 can be ignored.
Consider whether the key uncertainties identified have revealed collection requirements. The number of key uncertainties will also dictate whether/how much the analytic line requires further research and analysis, including contact with collectors, to ensure it is as robust as possible and accurately reflects the available information.
ASSUMPTION (RELEVANCE, SUPPORT)
RED is supplying GREEN militarily (2, 2)
GREEN has no other significant military supplier (1, 1)
RED has provided GREEN with non-military goods and training (0, 2)
GREEN has WMD programmes (2, 1)
GREEN depends on RED assistance for its WMD programmes (2, 0)
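The relevance/support scores lend themselves to a very small filter for picking out the shaky, load-bearing assumptions. A minimal sketch follows; the scores are those from the example matrix above, but the threshold logic (relevance of 2 with support below 2) is one reasonable reading of step 3, not a prescribed rule.

    # Minimal sketch: filter a Key Assumptions Check matrix for assumptions that
    # are essential to the analytic line (relevance 2) but weakly supported.

    assumptions = [
        # (assumption, relevance 0-2, support 0-2) - scores from the example above
        ("RED is supplying GREEN militarily", 2, 2),
        ("GREEN has no other significant military supplier", 1, 1),
        ("RED has provided GREEN with non-military goods and training", 0, 2),
        ("GREEN has WMD programmes", 2, 1),
        ("GREEN depends on RED assistance for its WMD programmes", 2, 0),
    ]

    key_uncertainties = [(text, rel, sup) for text, rel, sup in assumptions
                         if rel == 2 and sup < 2]

    for text, rel, sup in sorted(key_uncertainties, key=lambda a: a[2]):
        print(f"Revisit: {text} (relevance {rel}, support {sup})")

As in the worked example, this flags items 4 and 5 for revisiting.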
21

HYPOTHESIS TESTING
KEY ASSUMPTIONS CHECK (KAC)
Tips
For generic guidance on planning and running a simple KAC event, see ‘Analytical Events’ on
page 63. Below are a number of tips that should help you organise a simple group KAC.
Include one or two experienced analysts who have some familiarity with the topic, but who are not currently working on the subject matter and are not constrained by the prevailing view on the analytic line. The facilitator should make clear at the start that those taking part need to be prepared for and open to the fact that the analytic line may be wrong.
The list of assumptions underpinning the analytical line in question can either be:
• generated by the participants and collated into a single list prior to the workshop and then reviewed and checked during the workshop; or
• both generated and checked during the workshop.
If you plan to do the latter, DI FAM recommends some silent brainstorming initially during
which each participant is asked for a limited number (e.g. 3 – 5) of key assumptions on
sticky notes, though it is also acceptable to ask participants to just call out their ideas and 
a facilitator to note these down on a whiteboard or flipchart. See also ‘How to do it’ on
page 12, as a simplified version of structured brainstorming to generate the key assumptions
would work well in this instance.
Remember that generating the key assumptions during the event is likely to make your workshop considerably longer, though how long will depend on the complexity of the analytic line. For most problems it is likely to take 30 - 60 minutes. Generating key assumptions prior to the event allows you to plan how long you will need to filter them and complete your matrix. Allow up to 30 mins to consider and discuss the KAC results and their implications for the analytic line.
Make sure the analytic line and the prompting questions are displayed where all can see them (e.g. on a whiteboard, flipchart or the wall). An A1-size pre-printed matrix attached to a whiteboard or wall, or a matrix hand drawn onto a whiteboard, should be used for the exercise. These props will help to keep both the facilitator and the participants systematic in their approach and focused on the tasks at hand.
22

SCENARIO GENERATION
“The impossible sometimes happens; the inevitable sometimes does not.”
Daniel Kahneman
Scenario generation helps answer open, future-focused questions such as:
How would RED react to international pressure over its nuclear activities?
What will GREEN’s leadership be like in 2020?
Such questions ask you to generate a set of hypotheses about the future, or ‘scenarios’.
Often, the customer is interested only in getting an idea of the range of possible futures, so
they can make their decisions or policies as robust as possible. Sometimes, however, the 
policy customer needs an idea of which scenario (or category of scenarios) is most likely, in
which case the scenario generation phase should be followed by a scenario evaluation of 
the generated scenarios.
Structured scenario generation can help you overcome the natural tendency to assume that the future will look much as it does today, or that the future is unknowable so there is no point trying to envisage what it may entail. Such techniques can allow you to identify plausible alternatives and longer term perspectives that challenge conventional thinking, and encourage a better understanding of the key drivers behind an issue in your area of expertise. Generating a range of scenarios can be useful for a range of customers who need to have strategies for a variety of outcomes.
You don’t have to use a scenario-generation technique in responding to an open, future-
focused intelligence requirement. Not doing so, however, means you might be more likely
to fall prey to cognitive traps, including groupthink, hindsight bias, failure of imagination,
availability and recency effects, and mirror-imaging.
23



SCENARIO GENERATION
‘BREAKING THE MIRROR’ (OR RED TEAMING AT DESK)
< 2.5 hrs
What is it? 
Red Teaming has a range of interpretations, but as a specific analytical technique or approach
it usually involves trying to adopt the mindset of a foreign protagonist to think through their
policy or strategy on a particular issue.
Preparing a Red Teaming event, however, can be time-consuming and labour-intensive.
This pared down or basic version of Red Teaming - which is based around SWOT Analysis
(see page 40) - can allow an analyst at their desk to develop potential courses of action for
a state or group of interest.
Other techniques and approaches to use with it:
• SWOT Analysis is a useful framework for examining an organisation's position with regard to a particular situation, given its objectives (see page 40). The results can be used to infer its potential actions and, where appropriate, to identify the UK's own vulnerabilities.
• A suitable environmental scanning category, such as STEMPLES, to widen the scope of your thinking and encourage the identification of all relevant pieces of information (see page 10).
Level of effort required
Much depends on the nature of the question. Assuming, however, you have clarified the
question and already gathered some suitable background material, a simple ‘Breaking the
Mirror’ desk exercise could be completed in as little as 60 mins.
How to do it
1. Before you commence your exercise, make sure you are clear about your intelligence question and then rephrase it from the protagonist's perspective. For example:
How would insurgent group RED respond to GREEN's military withdrawal from Redistani territory, and move itself closer to taking power?
might become:
How can we exploit the invaders' withdrawal from soil which is rightfully ours and move ourselves closer to leading our people?
2. Next, list all the key objectives of the protagonists, as you understand them, which relate to the issue at hand.
KEY OBJECTIVES OF RED STRATEGY DURING WITHDRAWAL
• Make sure GREEN leaves for good
• Be the only credible candidates for government of Redistan
• International recognition of RED government
• Support from Redistani people.
24

SCENARIO GENERATION
‘BREAKING THE  MIRROR’  (cont.)
3. Formulate appropriate SWOT questions to help you - as RED - identify your internal strengths and weaknesses as well as the external factors presenting opportunities and threats with regard to your objectives as described.
STRENGTHS: Which of our characteristics will help us achieve our objectives?
WEAKNESSES: Which of our characteristics will work against the achievement of our objectives?
OPPORTUNITIES: What events or developments might help us achieve our objectives?
THREATS: What events or developments might work against the achievement of our objectives?
Whilst answering your four SWOT questions use STEMPLES to help you widen the scope
of your thinking.
[DI FAM 'Environmental Scanning: STEMPLES' poster listing the categories Social, Technological, Environmental, Military, Political, Legal, Economic and Security.]
25

SCENARIO GENERATION
‘BREAKING THE  MIRROR’  (cont.)
STRENGTHS
• Unity of purpose - we will rule *
• We have choice over location and nature of engagement
• The invaders are leaving and we know it *
• We represent the population
• We have infiltrated GREEN's forces
• Our networks are resilient and adaptable
• We have support from country BROWN *
• We have good media capability - can send messages of victory
• We have long military experience and an experienced leadership
WEAKNESSES
• We lack medical equipment and training
• We have no defence against invader aircraft
• We lack conventional firepower
• Contact with senior leaders is difficult *
• We cannot rely on all commanders to enact orders
• We have too much autonomy at lower levels
• We have a weak, compromised and very slow communications network *
• We generate negative publicity due to civilian casualties
OPPORTUNITIES
• Withdrawal timescales are driven by the invaders' political masters *
• It will be generally much easier for us to operate and our leaders will be able to return home more often
• Media provides a global and domestic audience *
• Retreating invader forces will be sloppy, complacent and predictable, and lack resolve
THREATS
• Threat from our enemies in ORANGE province increases
• GREEN's capability to target and incarcerate our fighters increases
• Invaders change their minds and stay *
• Invaders manage to bribe our fighters to switch sides
26

SCENARIO GENERATION
‘BREAKING THE  MIRROR’  (cont.)
Once you have answered your SWOT questions, and are still thinking from RED's point of view, pick the key two or three items from each quadrant (see the starred (*) entries in the example above) and identify a best course or courses of action to exploit strengths and opportunities, and mitigate weaknesses and threats. See the table below for ideas generated from the populated SWOT matrix.
SUGGESTED STRATEGIES FOR RED
• No need to change tactics drastically - we can sit and wait
• Solicit military aid from BROWN before GREEN forces leave
• Invest in more secure communications equipment
• Expand communications strategy to encompass new media
• Ensure we don’t provoke GREEN into staying any longer
• Reassure BROWN that we will leave their territory when GREEN has withdrawn
Finally, develop these into a more detailed action plan. Use your knowledge of the intelligence
to assess the extent to which the protagonist is considering or planning along these lines.
If you have generated any particularly high-impact potential actions that you hadn’t previously
considered, you might want to amend your collection plan to take account of them.
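If you keep your SWOT entries electronically, pulling out the starred key items per quadrant is trivial. The sketch below is purely illustrative; only a sample of the RED entries is shown, and the '*' convention simply follows the worked example above.

    # Minimal sketch: a SWOT capture where key items are starred, mirroring the
    # RED example above. Only a sample of the entries is shown.

    swot = {
        "Strengths": ["Unity of purpose - we will rule *",
                      "We represent the population"],
        "Weaknesses": ["Contact with senior leaders is difficult *",
                       "We lack conventional firepower"],
        "Opportunities": ["Withdrawal timescales are driven by the invaders' political masters *"],
        "Threats": ["Invaders change their minds and stay *",
                    "Threat from our enemies in ORANGE province increases"],
    }

    def key_items(quadrants):
        """Return the starred (key) entries in each quadrant, star removed."""
        return {q: [e.rstrip(" *") for e in entries if e.endswith("*")]
                for q, entries in quadrants.items()}

    for quadrant, items in key_items(swot).items():
        print(quadrant, "->", items)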
27

SCENARIO GENERATION
CONE OF PLAUSIBILITY
What is it? 
The Cone of Plausibility allows the generation of a range of plausible scenarios that describe
how a subject area may look after a given timeframe (e.g. a few weeks through to some 
20 – 25 years hence). It provides a clear audit trail to explain how they are reached.
However, simply generating key drivers relating to a subject area (without generating 
scenarios) can provide you with useful insights about what factors are the most important
in shaping the future.
Other techniques and approaches to use with it:
• Brainstorming to identify key drivers (see page 12).
• Environmental scanning (e.g. STEMPLES) to widen the scope of your thinking and encourage the identification of all key drivers (see page 10).
• Voting and ranking: (i) to identify the most important drivers if more than 7 are generated; and (ii) to identify which assumptions to change (see page 50).
Level of effort required
Assuming the question and timeframe are clarified before a workshop, a group of 6 – 8 analysts (see 'Event planning rules of thumb' on page 63 for more information on group size) could:
• complete up to 5 simple scenarios in a minimum of two and a half hours;
• identify key drivers in around 30 mins.
An analyst at their desk could probably complete both tasks faster. The Cone methodology output provides a useful framework for structuring and producing a product quickly as well as presenting the results effectively.
How to do it
1. Determine the question and set a timeframe, for example:
'What will RED's international status be in 2020?'
The timeframe may be determined by the task at hand or specified by the customer. Alternatively, it may be determined by forthcoming issues that an analyst deems important (e.g. elections, age of a dictator, expected acquisition of a military capability).
2. Identify the drivers. These are the forces shaping current events in the subject area. They should be written as neutral statements, e.g. "oil price" rather than "rising oil prices", or "leadership" rather than "repressive leadership". They should also be forces that you judge will remain relevant over the timeframe. Use STEMPLES or similar to encourage identification of all key drivers. DI FAM recommends that you generate 5 - 7 drivers for most exercises. If substantially more drivers are generated then you should select the 5 - 7 drivers you consider to be the most important.
3. Make judgements ('assumptions') about how the drivers will behave over the timeframe. Try to be as specific as possible when wording your assumptions. For example, rather than saying "the economy will grow", a more specific assumption could be "the economy will continue to grow at the same rate as in recent years". The more specific the assumption, the less ambiguity and doubt there will be in the minds of analysts and customers. Generate only one assumption per driver.
28


SCENARIO GENERATION
CONE OF PLAUSIBILITY (cont.)
4. Generate a baseline scenario by taking your list of drivers and assumptions and creating a scenario from them. This is usually a projection forward of the current situation. For your baseline (and for that matter your plausible alternative and wildcard) scenarios, use a narrative style and write them as a future that has already come to pass. This means the scenarios should not contain the kind of qualifying language (e.g. possibly, could, may) commonly used in intelligence assessments. Using either the past ("RED has achieved") or the present continuous ("It is 2030 and RED is achieving") tense can help to do this.
29


SCENARIO GENERATION
CONE OF PLAUSIBILITY (cont.)
5. Generate a plausible alternative scenario. The most common way to do this is by changing an assumption that you judge is more likely to change over the time-frame of the study than the others. Then consider the impact on the baseline of the change made to that assumption. Consider any possible impacts that the changed assumption may have on the other assumptions you have not deliberately changed. These impacts should appear in your scenario description. For example, if we change the assumption related to the social driver, a plausible alternative scenario may emerge, structured as described above.
30


SCENARIO GENERATION
CONE OF PLAUSIBILITY (cont.)
6. Generate a plausible 'wildcard' scenario by re-examining the assumptions underpinning the baseline and radically changing the assumption you judge to be the least likely to change. This should produce a high impact/low probability scenario. For example, if we change the assumption about RED's relations with the West, a high impact/low probability 'wildcard' scenario may emerge.
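The driver/assumption bookkeeping behind steps 2 – 6 can be kept in a very small structure. The sketch below is illustrative only: the driver names and assumption wordings are invented (the economy example echoes step 3), and 'changing' an assumption simply swaps in an alternative wording to seed a new scenario narrative.

    from copy import deepcopy

    # Minimal sketch: drivers with one assumption each (steps 2-3). An alternative
    # or wildcard scenario is seeded by swapping one assumption (steps 5-6).
    # All driver names and assumption wordings here are invented.

    baseline = {
        "oil price": "remains close to today's level",
        "leadership": "the current leadership stays in power",
        "economy": "continues to grow at the same rate as in recent years",
    }

    def variant(assumptions, driver, new_assumption):
        """Return a copy of the assumption set with one driver's assumption changed."""
        changed = deepcopy(assumptions)
        changed[driver] = new_assumption
        return changed

    alternative = variant(baseline, "economy", "contracts sharply")              # plausible alternative
    wildcard = variant(baseline, "leadership", "is replaced in a sudden coup")   # high impact / low probability

    for name, scenario in [("Baseline", baseline), ("Alternative", alternative), ("Wildcard", wildcard)]:
        print(name + ":", "; ".join(f"{d} {a}" for d, a in scenario.items()))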
31

When you suspect that two phenomena A and B are related, there are always five possibilities to consider:
1. That A causes B
2. That B causes A
3. That A and B are both caused by a separate phenomenon, C
4. That the apparent relationship between A and B is a coincidence.
5. That the data are wrong.
32

SCENARIO EVALUATION
“There is a tendency in our planning to confuse the unfamiliar with the improbable.
The contingency we have not considered seriously looks strange; what looks strange 
is thought improbable; what is improbable need not be considered seriously.”
Thomas Schelling 
Scenario evaluation is appropriate for closed, future-focused questions such as
Will RED withdraw from the Nuclear Non-Proliferation Treaty by 2020?
Will there be a peace treaty between RED and GREEN by 2015?
These types of questions invite the analyst to investigate specific, defined scenarios, typically to identify warning indicators, identify key points of influence, or assess their probability. Scenario evaluation techniques are designed to give analysts proven structures for their approach to questions of this sort, which are extremely common in the field of intelligence analysis.
Scenario evaluation techniques raise your awareness about what is essential for a high impact outcome to occur - however unlikely you may deem it - and what would be seen (i.e. indicators) were it coming to pass. You may have some idea of what would lead to particular scenarios, but without systematically identifying, filtering, monitoring and reviewing indicators you run the risk of missing key indicators and being surprised by events. Using such techniques helps to expose intelligence gaps and the limitations of intelligence with regard to specific warning problems, helping you to demonstrate quickly and clearly to seniors and customers why the precise timing of a high impact outcome was not predicted.
You don’t have to use a structured scenario evaluation approach in answering questions of 
this sort, but not doing so means you run the risk of various cognitive traps, including failure 
of imagination, availability and recency effects, and a number of biases associated with the way
we comprehend and process information about the future.
33



SCENARIO EVALUATION
‘BACKCASTING-LIGHT’ (QUICK I&W)
< 2.5 hrs
What is it? 
Backcasting provides analysts with a framework to explore how future outcomes (usually 
high impact) could come about. It is a useful first stage for tackling a warning problem and
usually involves:
(i) specifying an outcome and a timeframe for it;
(ii) establishing what is essential (i.e. key assumptions) to bring it about; and 
(iii) using these key assumptions to plot a timeline of plausible events and trends leading
to the outcome. Some of these events and trends may serve as useful I&W once further 
analysed.
If busy, however, you can simply use stages (i) and (ii) of backcasting as they can provide 
valuable insights into the necessary pre-conditions for a particular outcome to occur.
If you have more time and wish to do a backcast, see Chronologies and Timelines on page 44
and the PHDIA Backcasting Analysis Guidance Note.
Other techniques and approaches to use with it:
• Environmental scanning (e.g. STEMPLES) to widen the scope of your thinking and encourage the identification of all factors that are essential for the outcome to occur, and to organise them by theme (see page 10).
• Chronologies and timelines (see page 44): develop potential indicators - i.e. events and trends - from your key assumptions and plot them in chronological order on a timeline.
• Filtering (see page 52): take the indicators developed and filter them by a range of appropriate criteria (such as relevance, uniqueness, observability, timeliness) to identify which are useful for monitoring a warning problem over time.
Level of effort required
Assuming the outcome to be analysed has been established in advance of a workshop, a 
group of 6 – 8 analysts (see ‘Event planning rules of thumb’ on page 63 for more information
on group size) could probably generate the key assumptions in around 30 minutes, though 
this depends on the complexity of the future outcome being considered. An analyst at their
desk could probably complete this more quickly.
34

SCENARIO EVALUATION
‘BACKCASTING-LIGHT’ (QUICK I&W) (cont.)
How to do it
Establish the short scenario or future outcome (and timeframe) which is to be examined.
Go through the scenario in detail to make sure there are no ambiguous terms in it 
(e.g. use ‘regular demonstrations numbering thousands of people’ rather than ‘unrest’).
Ask yourself the following key question:
“What would have to happen for this scenario to come about?”
Use STEMPLES to help you generate these ‘key assumptions’ which the scenario depends
on. Remember that external factors should be considered (e.g. issues such as the behaviour
of regional players and global trends like oil prices if the outcome relates to developments
in a particular state).
5 – 10 key assumptions are usual if the future outcome is relatively simple. Outcomes
involving numerous players (e.g. the achievement of a comprehensive Middle East peace)
may involve considerably more.
Review your ideas to ensure nothing is missing.
If required, the next stage would be to produce an illustrative timeline setting out how a
scenario might unfold. See page 46 for an example.
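The bookkeeping behind stages (i) and (ii) is simple enough to capture in a few lines of structured data if you want a desk record. Below is a minimal illustrative sketch in Python (not part of DI FAM's mandated process): it records an outcome, its timeframe and the key assumptions, with each assumption tagged by a STEMPLES-style category so the set can be reviewed theme by theme. The outcome and assumptions are abbreviated from the notional coup example used later in this guide; the category tags are illustrative only.

# Illustrative sketch: recording a 'backcasting-light' output as plain data.
# The example entries are abbreviated from the notional RED coup scenario
# used in the Chronologies and Timelines section of this guide.

backcast = {
    "outcome": ("By the end of 2014 President Smith of RED and the security "
                "chiefs have seized power unconstitutionally and democracy "
                "has been suspended."),
    "timeframe": "end of 2014",
    # Key assumptions: what would have to happen for the outcome to occur,
    # each tagged with an illustrative STEMPLES-style category.
    "key_assumptions": [
        ("Political", "President and NSC fear the imminent loss of their power"),
        ("Political", "President and NSC agree on a course of action"),
        ("Military", "Security forces remain loyal; no unmanageable social resistance"),
        ("Economic", "Sufficient sources of funding to pay the security forces"),
        ("Social", "President and NSC can manage the information environment"),
    ],
}

def assumptions_by_theme(backcast):
    """Group the key assumptions by category for review."""
    themes = {}
    for category, assumption in backcast["key_assumptions"]:
        themes.setdefault(category, []).append(assumption)
    return themes

if __name__ == "__main__":
    for theme, items in assumptions_by_theme(backcast).items():
        print(theme)
        for item in items:
            print("  -", item)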
35

SCENARIO EVALUATION
SCENARIO KEY ASSUMPTIONS CHECK
What is it? 
Once a ‘most likely’ scenario has been established, a scenario KAC involves identifying all 
the underpinning assumptions behind it, and making judgements as to how important and
well-supported they are. It allows you to check your analysis (or that of others) with
regard to future developments or outcomes, including exposing firmly held, hidden (i.e.
unconscious) assumptions which may have gone unchallenged over time. This is especially
important for intelligence analysts, who routinely have to make assumptions to fill gaps
where information is incomplete or ambiguous.
Other techniques and approaches to use with it:

Matrices to list and rate the assumptions (see page 48).

Filtering to identify the most important and least well-supported assumptions
(see page 52).

Structured brainstorming (see page 12)
Level of effort required
Assuming there is only one key scenario of interest, and that it has been identified in
advance, this can be done by a group of 6 – 8 analysts (see ‘Event planning rules of thumb’
on page 63 for more information on group size) in less than two hours. Generating the list
of key assumptions in advance of an event will also reduce the workshop length. An analyst
at their desk could almost certainly complete a KAC faster than a group of analysts, but
KAC works much better as a group activity as other perspectives are required.
How to do it
Using your scenario in place of a hypothesis, follow the process set out on page 20.
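For an analyst working at their desk, the record-keeping behind a scenario KAC amounts to listing each assumption and noting two judgements about it: how important it is to the scenario, and how well supported it is. The sketch below is an illustrative Python fragment only (the assumptions shown are invented, and the support labels borrow the 'solid' / 'with caveats' / 'unsupported' wording used in the Filtering section of this guide); it pulls out the assumptions most in need of challenge – those judged important but not well supported.

# Illustrative sketch of a scenario Key Assumptions Check record.
# The assumptions below are invented examples, not drawn from reporting.

assumptions = [
    {"assumption": "RED's leadership judges treaty withdrawal to be survivable",
     "importance": "high", "support": "with caveats"},
    {"assumption": "GREEN will not offer RED security guarantees before 2020",
     "importance": "high", "support": "unsupported"},
    {"assumption": "RED's technical programme continues at its current pace",
     "importance": "medium", "support": "solid"},
]

def needs_attention(assumptions):
    """Return the assumptions most deserving challenge: those judged
    important to the scenario but not well supported by the evidence."""
    return [a for a in assumptions
            if a["importance"] == "high" and a["support"] != "solid"]

if __name__ == "__main__":
    for a in needs_attention(assumptions):
        print(a["assumption"], "-", a["support"])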
36

DATA ORGANISATION
DATA ORGANISATION
“Effective analytic designs entail turning thinking principles into seeing principles.”
Edward Tufte
The final category, data organisation, applies to all questions. There are numerous such
approaches at the analyst’s disposal. In some cases, it may be sufficient to use them alone
(e.g. SWOT analysis) to tackle a question, particularly closed questions about the present
that require you to determine whether or not something is the case. However, data 
organisation is useful for supplementing the four categories of technique. Some data 
organisation approaches – like the use of environmental scanning and matrices – are almost
universally helpful.
Analysts have to scan and assess ever-increasing amounts of information, which can feel
overwhelming. Used alone or with other approaches, data organisation can help you in a
variety of ways. These techniques can save time by providing ready-made checklists of
things to consider and useful categories for grouping ideas. They also reduce complexity by
allowing you to present large amounts of data visually, aiding your own, seniors' and
customers' understanding of an issue.
37



DATA ORGANISATION
ENVIRONMENTAL SCANNING CATEGORIES
10 mins
What is it?
Environmental scanning is the use of simple mnemonics (e.g. STEMPLES) to widen the
scope of your thinking and generate a broad range of ideas or explanations – that is,
hypotheses – for particular developments seen (see page 10). However, the categories
themselves can also be used to group ideas generated or intelligence gathered into themes,
for the purposes of interpreting them to help answer the question at hand.
Other techniques and approaches to use with them:

Use matrices to present ideas in themes (see page 48)

Use environmental scanning categories to organise data generated during,
for example:
- Structured Brainstorming (see page 12);
- SWOT Analysis (see page 40);
- Breaking the Mirror/Red Teaming (see page 24);
- ACH (see page 16);
- Cone of Plausibility (see page 28);
- Backcasting-Light (see page 34);
- Analysis of Warning Indicators/I&W (see Filtering on page 52).
Level of effort required
Minimal effort is required. Using environmental scanning categories to organise ideas can
help speed up interpretation of data and the production of written assessments, as well as
help you present your data well.
How to do it
1
Simply use appropriate environmental scanning categories to help you cluster your ideas
into themes. In some instances this might be required during a group activity (e.g.
brainstorming and mind mapping). In other instances this will be when you are at your desk
and are seeking to organise and interpret data to produce an assessment.
2
See opposite an example of how ideas generated during a brainstorming session can be
clustered into STEMPLES themes to interpret and present them. The matrix quickly reveals
that the bulk of the drivers generated (leading states to develop or acquire WMD
programmes) were deemed either military/security-related or political (including
personality-based drivers).
3
Another commonly-used acronym is TEPID OIL, which describes eight components of
'capability': training, equipment, personnel, information, doctrine and concepts, organisation,
infrastructure, and logistics. 'SWOT' is another simple environmental scanning acronym.
Acronyms along these lines are useful for making sure you have considered a wider range of
factors than you might have done without such structures.
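As a desk aid, the clustering step can be reduced to sorting free-text ideas under the category each best fits. The short Python sketch below is illustrative only: it assumes the analyst has already assigned a STEMPLES category to each idea and simply groups and prints them as a themed list. The example ideas are drawn from the WMD drivers matrix on the following page.

# Illustrative sketch: clustering brainstormed ideas into STEMPLES themes.

STEMPLES = ["Social", "Technological", "Economic", "Military",
            "Political", "Legal", "Environmental", "Security"]

# (category, idea) pairs as they might come out of a brainstorm; the ideas
# are taken from the WMD drivers example in this section.
ideas = [
    ("Political", "Leadership or regime paranoia"),
    ("Military", "Need to counter existing threats from hostile states"),
    ("Security", "Weak conventional forces - WMD seen as a force multiplier"),
    ("Technological", "Access to an ally's technology"),
    ("Economic", "WMD believed to give a comparable effect at less cost"),
]

def cluster(ideas):
    """Group ideas under their category, preserving STEMPLES order."""
    themed = {category: [] for category in STEMPLES}
    for category, idea in ideas:
        themed.setdefault(category, []).append(idea)
    return themed

if __name__ == "__main__":
    for category, items in cluster(ideas).items():
        if items:
            print(category + ":")
            for item in items:
                print("  -", item)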
38

DATA ORGANISATION
ENVIRONMENTAL SCANNING CATEGORIES (cont.)
DRIVERS OF WMD PROGRAMMES OR ACQUISITION
Social
• A galvanising individual (e.g. scientist) inspires or facilitates a programme
Technological
• Access to necessary technology at home
• Access to an ally's technology
Environmental
• Availability of fissile material
Military
• Need to counter existing threats from hostile (or simply powerful) states, from neighbours and beyond
Security
• The need to counter future threats from the above
• Weak conventional forces and faced with threats as above – WMD seen as a force multiplier
• NBC programmes offer the advantage of covert development, or at least ambiguity
Political
• Leadership or regime paranoia
• Satisfy the ego of a leader, or a regime's wish to bolster prestige at home and abroad
• Keep the armed forces satisfied
• Programme could serve as a bargaining chip
Legal
• Perception that control frameworks are not very effective may lead states to assume they can get away
with developing a capability while remaining a signatory to agreements
Economic
• States unable to afford adequate conventional forces may believe WMD provides a comparable effect at
less cost where dual-use technology is available in country
• Economic problems may prompt a state to develop WMD to sell them for profit
39



DATA ORGANISATION
SWOT ANALYSIS
<2 hrs
What is it? 
SWOT is a simple mnemonic which helps you to classify the strengths, weaknesses, opportunities
and threats relating to an organisation's (e.g. a government's, armed forces', terrorist or insurgent
group's) ability to achieve its objectives. Strengths and weaknesses are internal to the organisation,
while the opportunities and threats are generated by the external environment.
Other techniques and approaches to use with them:

Matrices (see page 48): SWOT uses a simple matrix to capture the ideas generated.

Use a suitable environmental scanning approach, such as STEMPLES, to help you
widen the scope of your thinking and use its categories to organise your SWOT
results (see pages 10 and 40).

Use SWOT to structure Breaking the Mirror or Red Teaming exercises (see page 24)

Use filtering to ascertain the relative importance of the strengths, weaknesses, etc.
that you have generated (see page 52).
Level of effort required
Assuming the overall question and the four SWOT questions are clarified in advance, a group
of 6 – 8 analysts (see ‘Event planning rules of thumb’ on page 63 for more information on
group size) could complete a simple SWOT analysis in around 2 hours. An analyst at their 
desk could almost certainly complete a SWOT faster. Any filtering of the ideas generated
would add to the times quoted. The output of the methodology provides a useful framework for
structuring and producing an assessment quickly and is good for presenting data clearly.
How to do it
1
First, it is very important to establish the objective of the state, organisation or individual 
that you are interested in. For example:
RED wishes to retake the disputed Indigo Islands from GREEN by force and sustainably
occupy them.
2
Then establish your four SWOT questions. In this case they would be as follows:
STRENGTHS: What capabilities does RED currently have that would assist them in retaking the Indigo Islands?
WEAKNESSES: What characteristics of RED work against their ability to retake the Indigo Islands?
OPPORTUNITIES: What might happen, outside RED's control, that would make it easier for them to retake the Indigo Islands?
THREATS: What might happen, outside RED's control, that would make it harder for them to retake the Indigo Islands?
40

DATA ORGANISATION
SWOT ANALYSIS  (cont.)
3
Then use an appropriate environmental scanning approach to ensure you widen the scope
of your thinking and identify all the relevant strengths and weaknesses etc. In this case, you
would want to consider developing categories relevant to the structure of RED's armed
forces (e.g. air, naval, ground forces and special forces).
4
Spend some time generating ideas for each of the four SWOT questions, inserting them in a
simple four-box matrix as you do so. Review your ideas again using environmental scanning
to ensure you have not forgotten anything.
5
Look at your completed matrix and see what it tells you about your question. Consider
whether you need to filter your ideas to get a better feel for the relative importance of
strengths, weaknesses etc. For example, you may have identified seven strengths and only
two weaknesses, but the weaknesses may nevertheless be more important than the
strengths overall. So it may help to filter all your ideas by significance to the overall exam
question, using a simple scoring system such as 1 – 3 (1 being low significance, 2 being
medium significance and 3 being high significance).
See page 26, within the ‘Breaking the Mirror’ section, for a fully worked SWOT example
(from the perspective of a foreign protagonist).
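If you want to keep the four-box matrix and the optional 1 – 3 significance scores together in one place, a few lines of structured data are enough. The Python sketch below is illustrative only – the entries are invented examples for the notional Indigo Islands question, not an assessment – and it simply totals and sorts the scores so you can see where the weight of the analysis sits.

# Illustrative sketch of a SWOT matrix with 1-3 significance scoring.
# The entries are invented examples for the notional Indigo Islands question.

swot = {
    "Strengths":     [("Amphibious assault capability", 3),
                      ("Modernised air force", 2)],
    "Weaknesses":    [("Limited logistics for a sustained occupation", 3)],
    "Opportunities": [("GREEN garrison drawn down for exercises elsewhere", 2)],
    "Threats":       [("Allied naval deployment to the region", 3)],
}

def summarise(swot):
    """Print each box sorted by significance, with a simple box total."""
    for box, entries in swot.items():
        total = sum(score for _, score in entries)
        print(f"{box} (total significance {total})")
        for idea, score in sorted(entries, key=lambda e: e[1], reverse=True):
            print(f"  [{score}] {idea}")

if __name__ == "__main__":
    summarise(swot)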
41



DATA ORGANISATION
MIND MAPS
<60 mins
What are they? 
These are visual representations of concepts and the links between them. Mind maps show
ideas (words or images) connected by lines to explain the relationship between them. They
can help clarify your thinking on a topic or help you communicate it.
Other techniques and approaches to use with them:

Structured brainstorming: use mind maps to organise and link ideas as or once
they are generated (see page 12).

Environmental scanning like STEMPLES to widen the scope of your thinking and to
encourage identification of all relevant pieces of information (see page 10).
Level of effort required
Assuming the question has been clarified in advance, a group of 6 – 8 analysts (see ‘Event 
planning rules of thumb’ on page 63 for more information on group size) working quickly 
could complete a structured brainstorm with mind mapping in around 60 mins. An analyst at 
their desk may be able to complete a mind map more quickly. Creating a mind map can help 
you express clearly a complex issue or problem and thus help you to quickly provide a useful
framework around which to write an assessment.
How to do it
1
Generate a list of ideas that relate to the question at hand. Write the question at the centre
of the diagram.
2
Sort ten or so ideas at a time into themes or groups that seem logical given the question.
Arrange the ideas to radiate from the central question, starting with the most general themes.
3
Use lines to make connections between related ideas and arrows to show the direction of the
relationship – just one way or both ways. Don't just link ideas outwards from the central
question; link them crossways too, where appropriate. You may wish to label the connecting
lines with the ideas or – as shown in the diagram overleaf – just use lines to link the ideas.
4
Once all the ideas have been incorporated into the mind map, review it to see whether there
are any obvious gaps in terms of the ideas themselves or the links between them.
An analyst working alone could either take ideas generated by a group or come up with their
own ideas using environmental scanning (e.g. using a mnemonic like STEMPLES). If you come 
up with a lot of ideas, it may well be quicker and easier to complete your mind map by using 
the sticky note and whiteboard/paper approach described above as you will inevitably change
things considerably before you are finally satisfied. Once finished you can then draw it on a 
piece of paper or use Excel to recreate it.
For using this technique in a group, once ideas have been generated through structured
brainstorming, use the mind mapping approach described above to organise them. Have
your question at the centre of a whiteboard or large sheet of paper so all participants can
see it. Then the facilitator (preferably with a helper) should, by way of discussion with the
participants, move the clusters of ideas written on sticky notes to radiate out from the
central question, arranging and linking them as appropriate. For generic guidance on planning
and running a simple analytical event see 'Analytical Events' on page 63.
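Underneath the drawing, a mind map is just a set of ideas and the links between them, so it can also be captured as simple data if you want to rework it at your desk or recreate it electronically. The Python sketch below is illustrative only: it stores links as pairs and prints the ideas radiating out from the central question as an indented outline. The node names echo the example overleaf.

# Illustrative sketch: a mind map held as ideas (nodes) and links (pairs).

central = "How can analytical techniques help DIAS analysts?"

# Links run outwards from the central question; crossways links between
# ideas can be added to the same list.
links = [
    (central, "Speed up assessments"),
    (central, "Avoid biases and traps"),
    ("Speed up assessments", "Generating and testing hypotheses"),
    ("Speed up assessments", "Helping with complexity"),
    ("Avoid biases and traps", "Confirmation bias"),
    ("Avoid biases and traps", "Group think"),
]

def children(node, links):
    """Return the ideas linked outwards from a given node."""
    return [b for a, b in links if a == node]

def print_map(node, links, depth=0):
    """Print the mind map as an indented outline, radiating from the centre."""
    print("  " * depth + node)
    for child in children(node, links):
        print_map(child, links, depth + 1)

if __name__ == "__main__":
    print_map(central, links)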
42

DATA ORGANISATION
MIND MAPS (cont.)  
MIND MAP
[Example mind map, centred on the question 'HOW CAN ANALYTICAL TECHNIQUES HELP DIAS ANALYSTS?'. Its main branches are: speed up assessments by clarifying and structuring thinking, generating robust scenarios for contingency planning, generating and testing hypotheses, identifying the most discriminating evidence and helping with complexity; help produce robust forward-looking analysis for customers by encouraging creativity, exposing intelligence gaps, identifying signposts and indicators for warning problems, identifying the most crucial gaps and informing collection plans; avoid biases and traps such as confirmation bias, mirror-imaging, group think, anchoring, deception and hidden assumptions, and so reduce intelligence surprise; provide assessments with an intellectual audit trail, and confidence in the assessment, for the benefit of customers and decision makers, DIAS colleagues, successors, other UK/Allied SMEs and the analysts themselves (including helping to defend an assessment if it proves wrong); and promote task-force working (e.g. with SMEs and customers) through analytical projects and analytical events.]
43



DATA ORGANISATION
CHRONOLOGIES AND TIMELINES
<60 mins
What are they? 
A chronology is a list of past or future events or actions in the order they occurred or may
occur. A timeline is a graphic representation of the events or actions over a specific
timeframe. Chronologies and timelines help to identify patterns and trends and reveal connections
between events or actions. They are also a useful tool for systematically creating a chain of
events and trends leading to a future outcome (e.g. a coup, the collapse of a regime, a
successful weapons programme, the deployment of military forces) and are therefore helpful 
in generating indicators for monitoring warning problems.
Other techniques and approaches to use it with

Backcasting-Light (see page 34): Use as a bolt-on to Backcasting-Light if you
have time to do a full backcasting exercise.

Filtering (see page 52): Use to assess the usefulness of indicators developed
through creating a timeline. Those judged to be useful can then be used to
monitor a future outcome/warning problem.
Level of effort required
Timelines and chronologies can be used to order information (e.g. relating to a particular
theme of interest) as reports come to your desk. The amount of effort required will clearly
depend on the topic of interest and the quantity and nature of the incoming data. This effort
is counter-balanced by the fact that the output of the approach can help you analyse data
and produce an assessment quickly. Timelines are also useful for helping you present
complex data either in a briefing or in an assessment.
As part of a backcasting exercise a group of 6 – 8 analysts (see ‘Event planning rules of
thumb’ on page 63 for more information on group size) could probably populate a timeline
(leading to a specific future outcome) in around 60 mins. An analyst working at their desk
could almost certainly create such a timeline more quickly.
How to do it
1
When preparing a timeline (it can be a vertical or horizontal line) summarise the events and
trends etc and add them with dates in chronological order. Much depends on what you are
trying to achieve, but consider the following:

Using the space on both sides of the line for your entries;

Colour coding the actions of different actors, or plotting their behaviour along
separate parallel lines;

Dividing your timeline into particular phases, if appropriate;

Including small pictures or symbols instead of text.
44

DATA ORGANISATION
CHRONOLOGIES AND TIMELINES (cont.)  
2
For populating backcasting timelines (which involve postulating future events leading to an
outcome, rather than plotting a timeline of events that have already occurred), DI FAM
recommends the following:

Using the key assumptions (i.e. what is essential to bring the outcome about) you
have developed as an aide memoire to develop the entries for your timeline;

Having a trigger at one end of the timeline to set off a chain of events to a future
outcome. You may need others to maintain momentum towards the outcome;

Having the future outcome and its date at the other end of the timeline to
keep you focused;

Representing trends (e.g. growing influence of religion, increasing popular
unrest) as lines running parallel to your timeline of single events for all or part
of the timeframe under examination;

Deciding whether to just put events in chronological order or to give them
more specific dates.
See below a set of example key assumptions relating to a notional coup and the timeline
developed from the assumptions.
FUTURE OUTCOME
By the end of 2014 President Smith of RED and the security chiefs have seized
power unconstitutionally and democracy has been suspended.
KEY ASSUMPTIONS
• President and National Security Council (NSC) would have to fear the imminent
loss of their power and consequences
• President and NSC would need to agree on course of action
• President and NSC would have to retain the loyalty of the security forces and
there would be no large scale/unmanageable social resistance
• President’s party would require sufficient sources of funding to pay security forces
• President and NSC would be able to manage/control the information environment
• President and NSC would need to be confident of withstanding foreign interven-
tion and pressure.
• There would have to be a complete suspension of the rule of law
• The BLUE party would have to be neutralised
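For an analyst capturing a backcast at their desk, the essentials are a dated list of postulated events – a trigger at one end, the outcome at the other – kept in chronological order, plus any trends running in parallel. The Python sketch below is illustrative only; the entries are abbreviated from the notional coup example, and the dates are approximate and purely illustrative.

# Illustrative sketch: a backcast timeline as dated entries kept in order.

from datetime import date

outcome = ("By the end of 2014 President Smith of RED and the security chiefs "
           "have seized power unconstitutionally and democracy has been suspended.")

# (date, event) pairs - a trigger at one end, the outcome at the other.
events = [
    (date(2013, 1, 1), "TRIGGER: Smith decides free elections are un-winnable"),
    (date(2014, 5, 1), "State of emergency declared"),
    (date(2014, 7, 1), "Parliament suspended"),
    (date(2014, 12, 31), "OUTCOME: " + outcome),
]

# Trends run parallel to the single events for part of the timeframe.
trends = [
    ("Increasing control of the media", date(2013, 9, 1), date(2014, 12, 31)),
]

if __name__ == "__main__":
    for when, event in sorted(events):
        print(when.isoformat(), "-", event)
    for trend, start, end in trends:
        print(f"Trend: {trend} ({start.isoformat()} to {end.isoformat()})")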
45

DATA ORGANISATION
CHRONOLOGIES AND TIMELINES  (cont.)
[Example backcast timeline for the notional coup. Future outcome (one end of the timeline): by the end of 2014 President Smith of RED and the security chiefs have seized power unconstitutionally and democracy has been suspended. Trigger (other end), January 2013: President Smith decides that RED's first free and democratic elections are un-winnable and that he has sufficient resources to launch a coup. The chain of postulated events running between the two includes: National Security Council meetings excluding the BLUE party, the main element of the National Unity Government (March 2013); Western journalists and NGOs kicked out of RED (mid 2013); accusations against, and the removal from office of, the Foreign Minister (September – November 2013); a plot using manufactured 'evidence' to discredit BLUE (January 2014); Smith purging his inner circle, government and civil society of factions whose loyalty is in doubt (March 2014); security forces deployed to and maintained at key locations (April 2014); a state of emergency declared (May 2014); the introduction of martial law and curfews (June 2014); Parliament suspended (July 2014); Smith and the security chiefs taking control of the judiciary and all ministries (September 2014); and the leader of the BLUE party, Jones, forced to flee the country (October 2014). Ongoing trends, such as increasing control of the media and state-press vilification of BLUE's leadership, run parallel to the timeline of single events.]
46

DATA ORGANISATION
CHRONOLOGIES AND TIMELINES  (cont.)  
For a workshop ensure you have the empty timeline drawn up with the future outcome and
timeframe specified so all participants can easily see it. Use a large whiteboard or wall space
with butcher’s paper (long continuous roll of paper). Remember that a whiteboard has the
added benefit of allowing you to make amendments to the timeline.
For a backcasting exercise, have the key assumptions on a poster where everyone can see
them and use them as an aide memoire to develop events and trends for the timeline.
Then simply ask participants to offer event and trend ideas based on their subject matter
expertise. Analysts tend to feel comfortable identifying a trigger first and then working
mostly from the present to the future. However, developing a timeline is a messy business
and there can be a lot of heated discussion and jumping around from one end to the other
to fill in the gaps. Events etc often have to be shifted around as the timeline develops.
The output (i.e. the ideas generated) can be used as indicators to monitor a warning problem.
If they require further examination to see how useful they are, DI FAM recommends you filter
them by a range of criteria (see page 52).
47



DATA ORGANISATION
MATRICES 
10 mins
What are they?
A matrix is a grid that can be used to organise, filter, assess and present data systematically.
Matrices are particularly helpful when you need to deal with a lot of information quickly, and
can provide insights into similarities, differences, trends and gaps etc.
Other techniques and approaches to use with it:

The following techniques commonly use matrices:
- SWOT Analysis (see page 40);
- KAC (see page 20);
- ACH (see page 16);
- Analysis of Warning Indicators/I&W.

Filtering (see page 52): Use matrices to structure and conduct filtering exercises,
as well as to capture and present their results.

Voting and ranking (see page 50): Use matrices to structure and conduct voting
and ranking exercises, as well as to capture and present their results.
Level of effort required
Matrices require minimal effort to prepare and can help analysts working solo or in a group
deal quickly with large quantities of data relating to an issue and draw insights from it.
How to do it
1
Make sure that the matrix is geared towards answering the intelligence question at hand,
i.e. that all the relevant categories or filtering criteria have been identified. Ensure the
categories or filtering criteria are clearly defined. Then simply populate the matrix.
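At its simplest, a matrix is rows of items set against a fixed set of column headings, which is easy to keep consistent electronically. The Python sketch below is illustrative only; the column names and entries are abbreviated from the proliferation example that follows, and the final check simply flags any cell left unpopulated.

# Illustrative sketch: an analytical matrix as rows against fixed columns.

columns = ["Country candidate", "Applicable drivers",
           "Potential timeframe", "Intelligence gaps"]

rows = [
    {"Country candidate": "RED",
     "Applicable drivers": "Feels threatened; wants prestige",
     "Potential timeframe": "Less than 3 years if purchased",
     "Intelligence gaps": "Circumstances under which GREEN would supply RED"},
    {"Country candidate": "BLUE",
     "Applicable drivers": "Paranoia of leader; bargaining chip",
     "Potential timeframe": "Less than 3 years"},
]

def print_matrix(columns, rows):
    """Print each row under its column headings, flagging empty cells so
    the matrix stays geared to answering the question at hand."""
    for row in rows:
        for column in columns:
            print(f"{column}: {row.get(column, '(not yet assessed)')}")
        print()

if __name__ == "__main__":
    print_matrix(columns, rows)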
2
For workshops, empty matrices can be printed onto A1 paper or drawn onto whiteboards
in advance. They can then be completed during the event. They provide helpful structure,
keeping both facilitators and analysts focused and systematic. The table below was used to
help structure part of an 'over-the-horizon proliferation' conference workshop examining
what prompts states to develop or acquire WMD, using these drivers to identify and assess
candidate states that may choose to start WMD programmes over the next 10 years.
48

DATA ORGANISATION
MATRICES (cont.)  
COUNTRY CANDIDATE: RED
APPLICABLE DRIVERS: • Feels threatened • Wants prestige • Conventional imbalance • Ego of leader
POTENTIAL TIMEFRAME FOR A CAPABILITY: Less than 3 years if purchased. At least 10 years if developed
POTENTIAL DELIVERY MEANS FOR A CAPABILITY: Ballistic missiles and aircraft
INTELLIGENCE GAPS: Circumstances under which GREEN would supply RED with nuclear weapons.

COUNTRY CANDIDATE: BLUE
APPLICABLE DRIVERS: • Paranoia of leader • Bargaining chip
POTENTIAL TIMEFRAME FOR A CAPABILITY: Less than 3 years
POTENTIAL DELIVERY MEANS FOR A CAPABILITY: Aircraft
INTELLIGENCE GAPS: How serious is BLUE's leadership about a nuclear weapons option?

COUNTRY CANDIDATE: YELLOW
APPLICABLE DRIVERS: • Changing alliances • Feels threatened • Keep the military happy
POTENTIAL TIMEFRAME FOR A CAPABILITY: Less than 5 years
POTENTIAL DELIVERY MEANS FOR A CAPABILITY: Ballistic and cruise missiles
INTELLIGENCE GAPS: Are YELLOW's economic and political ties more important to it than security concerns?
3
On your own at the desk, create simple tables or – if there is a lot of data which needs to
be filtered – spreadsheets. The structure a matrix provides can also help an analyst present
that data coherently in a report or presentation. Below is a table from the post-workshop
report relating to the same over-the-horizon proliferation event already mentioned. It was
used to present the results of a voting and ranking exercise conducted by one of the
workshop sub-groups to identify candidates of most concern.
COUNTRY CANDIDATE | VOTES FOR 'OF MOST CONCERN TO NATO' | VOTES FOR 'CANDIDATES MOST AT RISK OF BECOMING A PROLIFERANT' | TOTAL VOTES | 'GREATEST POTENTIAL THREAT TO NATO INTERESTS' RANKING
RED | 6 | 6 | 12 | 1
BLUE | 5 | 6 | 11 | 2
GREEN | 5 | 3 | 8 | 3
YELLOW | 4 | 3 | 7 | 4
PURPLE | 4 | 3 | 7 | 4
WHITE | 2 | 2 | 4 | 6
BLACK | 0 | 3 | 3 | 7
BROWN | 2 | 0 | 2 | 8
ORANGE | 0 | 2 | 2 | 8
49



DATA ORGANISATION
VOTING AND  RANKING
<20 mins
What is it?
Voting and ranking is used to get analysts to quickly ascertain the top 5 or 10 (or however
many) subjects out of many or to obtain agreement rapidly.
Other techniques and approaches to use with it:

Structured brainstorming (see page 12): Use voting and ranking to reduce ideas
to those deemed most important or to obtain agreement rapidly.

Cone of Plausibility (see page 28): Use voting and ranking to determine the most
important key drivers if you have more than 7, or to help decide which assumptions
to change.

SWOT Analysis (see page 40): Use voting and ranking to identify the most
important strengths, weaknesses etc.

Matrices are excellent for structuring and presenting the results of voting and
ranking exercises (see page 48).
Level of effort required
A group of 6 – 8 analysts (see ‘Event planning rules of thumb’ on page 63 for more 
information on group size) could vote on and rank a limited set of ideas in under 20 mins 
with pre-prepared voting slips and an empty matrix to tally votes.
How to do it
1
For generic guidance on planning and running a simple analytical event, see ‘Analytical Events’
on page 63.
2
Prior to a workshop:

make sure the question to be voted on is clear;

prepare voting slips with the question on them and make clear how many
answers are required. See the example below;

have an empty matrix pre-printed onto A1-size paper or draw one onto a
whiteboard. For a single vote on an issue, a three column table like the one
shown below is ideal to capture the results.
50

DATA ORGANISATION
VOTING AND RANKING (cont.)  
3
During the workshop the facilitator should:

prior to the vote enter the items (e.g. drivers) to be voted on into the left
hand column of the matrix so participants can see what they are selecting
from;

give each participant three silent votes to cast - this means no discussion is
allowed between participants. The votes need not be cast in any particular order of
importance, but participants can only vote for each item once;

ask participants to hand in their voting slips when finished;

enter the voting results into the second column of the matrix;

create a ranking order to show which items received the most votes through
to which received the least;

select however many items are required (e.g. the top 3, 5 or 10). The example
shown below relates to a Cone of Plausibility exercise, which requires 5 – 7
drivers. In this case it would probably be appropriate to take forward the top 6,
unless the participants felt further discussion was required.
VOTING SLIP
Question: What are the three most important key drivers
with regard to Country X’s stability in 2015?
1.  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2.  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3.  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
KEY DRIVERS | NUMBER OF VOTES | RANKING ORDER
Leadership | 12 | 1
Insurgency | 11 | 2
Armed forces | 8 | 3
Oil production | 7 | 4
Population growth | 7 | 4
Religious minorities | 4 | 6
Relations with neighbours | 3 | 7
Technological developments | 2 | 9
Foreign aid | 2 | 9
Agriculture | 2 | 9
51



DATA ORGANISATION
FILTERING
30 mins
(10 items to filter)
What is it?
Filtering involves assessing a set of data against a range of criteria relevant to answering the
question. You might filter items to determine which are the most useful, most important,
most reliable or most likely, etc.
The following techniques involve filtering:

KAC (see page 20): This involves filtering key assumptions underpinning an
‘analytic line’ as ‘solid’, ‘with caveats’ or ‘unsupported’ to ascertain the accuracy
of an assessment.

Analysis of warning indicators/I&W: this involves filtering identified indicators by
relevance to the question, uniqueness, observability, timeliness etc. to establish
their usefulness.

ACH (see page 16): This involves assessing or filtering evidence in different
ways (e.g. by reliability/credibility) and against specific hypotheses to identify 
discriminating evidence and ascertain which hypotheses are more or less likely.
However, you may wish to develop your own set of bespoke filtering criteria depending on
the question at hand.

Matrices are excellent for structuring and conducting filtering exercises as well
as capturing and presenting their results (see page 48).
52

DATA ORGANISATION
FILTERING (cont.)
Level of effort required
The level of effort depends on the number of items to be filtered and the number of filtering
criteria. But assuming a matrix with items to be filtered and the filtering criteria have been
pre-prepared, a group of 6 – 8 analysts (see ‘Event planning rules of thumb’ on page 63 for
more information on group size) working fast could filter a maximum of 10 items against 
3 – 5 criteria in 30 – 45 mins. An analyst working alone could do this filtering faster. Filtering
helps you structure your thinking on an issue and get some quick insights. The output of a
filtering exercise can also help you present data and insights drawn from it coherently and
quickly in a report or presentation.
How to do it
1
Identify all the items (e.g. indicators or key assumptions, depending on the question and
approach you are using) that are to be filtered. Use filtering criteria appropriate for the
intelligence question at hand. If bespoke, ensure the filtering criteria and associated scoring
systems are simple, well thought through and clearly defined (see the scoring system in the
WMD indicator example below).
2
Enter the items to be filtered on the vertical axis (i.e. the left hand column) of a matrix and
the filtering criteria along the horizontal axis (i.e. the top row). See the notional example
below. Starting with the first item, filter it by applying all the criteria, and carry on until you
have worked through all the items. Keep referring to the scoring definitions to try to ensure
your approach is consistent throughout.
3
Review the results to draw conclusions about which of your items are the most useful, the
most important, most reliable or most likely etc. See what insights the results provide with
regard to the question at hand.
4
Below is an example of a filtering exercise which was used to assess indicator ideas
(generated by brainstorming) that might be useful in alerting analysts to a state's decision to
restart a WMD programme. The following set of criteria (and associated simple scoring
system) was used to ascertain the most useful indicators to monitor:

Timeliness – A measure of how early an indicator relevant to RED’s decision to
restart WMD work would appear. On a scale of 0 – 3, with 0 denoting that the
indicator would arrive too late and 3 denoting that it would appear very early.

Diagnosticity – A measure of how uniquely an indicator points to WMD work
by RED. On a scale of 0 – 3, with 0 denoting that the indicator is totally
ambiguous and 3 denoting that it is unique to renewed WMD work by RED.

Observability – A measure of how likely we would be to observe an indicator
given collection capabilities and priorities. On a scale of  0 – 3, with 0 denoting
will not be seen/cannot collect against it and 3 denoting collectable with 
reasonable effort.
53

DATA ORGANISATION
FILTERING (cont.)
The results of the filtering exercise show that indicators:

B, E, G, H and I are not useful as they score 0 against at least one criterion (i.e.
they are not sufficiently observable, diagnostic or timely) and can therefore be discarded;

A and F may be useful as they score highly on some criteria but not very highly
on others; and

C, D and J are most useful as they score quite highly on a combination of the
criteria.
INDICATOR | TIMELINESS | DIAGNOSTICITY | OBSERVABILITY | USEFUL?
A | 1 | 3 | 3 | Maybe?
B | 2 | 2 | 0 | No
C | 3 | 2 | 2 | Yes
D | 2 | 3 | 3 | Yes
E | 0 | 1 | 0 | No
F | 2 | 3 | 1 | Maybe?
G | 3 | 0 | 2 | No
H | 1 | 1 | 0 | No
I | 2 | 0 | 1 | No
J | 3 | 2 | 3 | Yes
For workshops, empty matrices (including the items to be filtered and the filtering criteria) can
be printed onto A1 paper or drawn onto whiteboards in advance. The matrices can then be
placed where all participants can see them during the filtering. They provide helpful structure,
keeping both facilitators and analysts focused and systematic in completing the exercise.
Make sure that definitions of the filtering criteria and corresponding scoring systems are
also on posters where all can see and refer to them as required.
On your own you can create simple tables or – if there is a lot of data which needs to be
filtered – a spreadsheet. Work through the items and the filtering criteria as described
above.
54

COMMUNICATING UNCERTAINTY
COMMUNICATING UNCERTAINTY
“I told him that my personal estimate was on the dark side, namely that the
odds were around 65 to 35 in favour of an attack. He was somewhat jolted
by this; he and his colleagues had read “serious possibility” to mean odds
very considerably lower. Understandably troubled by this want of communication,
I began asking my own colleagues on the Board of National Estimates
what odds they had had in mind when they agreed to that wording. It was
another jolt to find that each Board member had had somewhat different
odds in mind and the low man was thinking of about 20 to 80, the high of 80
to 20. The rest ranged in between”.
Words of Estimative Probability (Sherman Kent, 1964)
Introduction
The accurate communication of uncertainty is one of the most important elements of good
intelligence assessment. When considering a course of action, policymakers must set its likely
benefits against its likely costs: if they do not have a clear idea of the probability of various
outcomes, the wrong decision might be made. Below are issues you should consider and 
guidance you should use when expressing probability and uncertainty.
55

COMMUNICATING UNCERTAINTY
THE PROBLEM OF COMMUNICATING UNCERTAINTY
There are two key challenges to the analyst when communicating uncertainty:
misinterpretation and misrepresentation.
Misinterpretation
A significant challenge to communicating uncertainty is the risk of misinterpretation as there 
are no widely-understood common definitions of probabilistic terms. One study showed that
among a group of NATO military officers – with experience of reading intelligence reports –
interpretations of the word ‘probable’ varied from 25% to 90% in terms of their understanding
of the likelihood of an event taking place. This kind of finding, which has been replicated on a
number of occasions, exposes a serious risk of misunderstanding by readers of intelligence
assessments.
Misrepresentation
In the absence of a common definition, readers of intelligence assessments may go on to 
re-draft or re-represent the assessment (for example, to abbreviate it for a more senior
consumer or indeed the general public) and thereby lose or misrepresent the sense of the
original assessment.
“If intelligence is to be used more widely by governments in public
debate in future, those doing so must be careful to explain its uses
and its limitations”.
The Butler Review
56


COMMUNICATING UNCERTAINTY
THE UNCERTAINTY YARDSTICK
In response to the challenges set out above, DIAS
mandates the use of a standardised lexicon of
terms – the Uncertainty Yardstick – for expressing
probability and uncertainty. It assumes familiarity
with the basic concepts of probability and
uncertainty (i.e. what it means to say something like
'it is 25% likely that RED has an active nuclear
programme'). It also assumes that the analyst has
arrived at a probabilistic judgement using a robust
method.
As this table suggests, if you believe that a certain
hypothesis is 75% likely you should describe it as
‘very probable’ or ‘highly likely’. Clearly, a 
standardised approach to probabilistic language is
only useful so long as readers of our assessments
understand it. Consequently, all DIAS products
which use the Yardstick should reproduce it,
ideally near to the ‘Assessment Base’ and
‘Methodology’ boxes.
You will note that the Yardstick appears to have
gaps (‘what about 72%?’). This is a deliberate
decision to avoid a false impression of accuracy.
If your assessment is robust enough to make a
fine-grained distinction between 70% and 72%,
then it probably makes sense simply to state the
figure itself.
Assessment first, language second
You should ensure that your assessment of probability comes first, and then the language
is chosen (from the Yardstick) to align with that assessment. Doing it the other way round
– in other words, deciding that something ‘feels like’ a ‘realistic possibility’ and deciding it’s
therefore 25-50% likely – is not a robust method of arriving at a probabilistic judgement.
Alternatives to the Yardstick
There are only a few circumstances in which use of the Yardstick is inappropriate. These
would include, for instance, an assessment in which the conclusion is sensitive to the 
difference between ‘one in three’ and ‘one in two’ (or 33% and 50%), in a way that the
Yardstick cannot capture (both would be described as ‘realistic possibilities’). If the
Yardstick is not sufficiently gradated to capture the details of your assessment, then it is
very important to state what alternative lexicon you are using in its place.
Probabilistic language in non-DIAS product
The Yardstick is not based on any external standards of probabilistic language. Instead it is
a standard that aligns to some extent with survey data on how readers tend to interpret
such terms. Although other organisations sometimes use standardised interpretations of
probabilistic terms, there is no guarantee they will correspond to those on the Yardstick.
57

COMMUNICATING UNCERTAINTY
Precision about time-frame
Another important aspect of communicating judgements concerning uncertainty is to make
sure the parameters of the judgement are stated explicitly. Statements about the future should
clearly state the time-frame of the judgement (e.g. say ‘2020’ rather than the ‘long term’).
Conditional probabilities
You also need to be very clear when you are expressing conditional probabilities – in other
words, probabilities of an event occurring given that some other event has already occurred.
For example:
If RED invades ORANGE, there is a realistic possibility that retaliation will involve the
use of chemical weapons
makes clear that the judgement about ORANGE’s use of chemical weapons only applies to
the circumstance of an invasion by RED, leaving open the possibility of other circumstances
in which chemical weapons might be used.
Modal language
You should always avoid so-called ‘modal’ language in the context of a probability judgement.
This includes terms such as ‘can’, ‘could’, ‘might’ and ‘may’, but also ‘possible’ (except in the
form ‘realistic possibility’). These are sometimes used as the probabilistic equivalent of
‘weasel words’: they appear to make a judgement about probability, but all they do in fact is
state that something is not impossible, which could imply a probability of 1% or 100%.
Of course, terms such as ‘may’ and ‘could’ do serve an important purpose by reminding the
reader that something is possible. For example:
RED could withdraw from the NNPT within four months
There is no reason to avoid them in this context. But analysts should avoid disguising
statements about mere possibility as probability judgements, for example:
RED could choose to mount punitive operations against ORANGE facilities, which
might prompt an escalation to non-conventional warfare, possibly involving…
Avoiding false precision
One common argument against the use of a standardised interpretation of probabilistic
language is that it is impossible to be sufficiently precise when making judgements about
inherently unpredictable events in the political or military sphere. But this is an argument in
favour either of improving our methods of making such assessments, or of being explicit
about our lack of information. If you have no idea of the probability of an event, then
statements such as:
President Jones will probably try to extend his term of office in 2010.
will give a misleading impression of precision. It would be more accurate in these
circumstances to say:
President Jones might try to extend his term of office in 2010, but we do not have
sufficient insight into his decision-making to judge how likely this is.
58

COMMUNICATING UNCERTAINTY
‘Confidence’ and probability
The relationship between confidence and probability is a minefield of possible confusion.
The word ‘confidence’ is sometimes used as a synonym for probability when making 
judgements about the likelihood of hypotheses, as in:
We are very confident that RED has an active nuclear programme.
This probably means no more than:
RED is highly likely to have an active nuclear programme.
However, ‘confidence’ is sometimes also used to express an analyst’s judgement about the
overall robustness of their assessment, as in:
We are moderately confident in our judgements about RED’s technical capabilities, but
less so in our judgements about their intentions
The possibility of confusion is compounded by the fact that other organisations actively 
recommend using confidence-based terms to express probabilities. However, DIAS 
recommend that ‘confidence’ is never used as a term expressing probability, whether it is
the probability that a hypothesis is true, or the probability of an event occurring: instead,
the Yardstick terms are the preferred lexicon. In addition, if analysts recognise that the
information underlying an assessment has flaws, it is better to incorporate this into their
stated probabilities (essentially by downgrading them) and to outline the reasons either in
the text or the Assessment Base and Methodology Boxes. So, for example, we do not
recommend having statements that discuss confidence and probability separately, such as:
We have very limited confidence in our judgements about President Jones’s intent and
decision-making.
and
President Jones is almost certain to amend the constitution this year to extend his
term of office.
Instead, you should consider that, if our knowledge of Jones’s desires is indeed limited, the
term ‘almost certain’ is probably misleading. You might therefore want to consider
something closer to the following:
Our knowledge of Jones’s intent and decision-making is based on fragmentary
intelligence from sources on the periphery of his network of advisers.
and
President Jones is likely to amend the constitution this year to extend his term of
office.
This better reflects our lack of knowledge about Jones.
59

COMMUNICATING UNCERTAINTY
Avoiding institutional biases
Your assessments should be guided only by the strength of the evidence. You should never
change your assessments because of pressure to make something more readable or
attention-grabbing, or to compensate for a perceived lack of interest among customers. Similarly,
do not describe something as more uncertain than it actually is, in response to a concern
that you may be subject to scrutiny if it does not occur, or proves to be false. For example,
if the evidence suggests that an outcome is almost certain, there is still potentially a one-in-
ten chance that it won’t occur. You should not therefore change your assessment from
‘almost certain’ to (for example) ‘probable’ on the basis that, if the event does not occur,
you will appear ‘less wrong’.
The Professional Head of Defence Intelligence Analysis (and ultimately the Professional Head
of Intelligence Analysis in the Cabinet Office) will strongly defend analysts’ impartiality in
matters such as these. For the analyst’s part, it is essential that the evidential basis for all
assessments is available, either in released products, or in an evidential audit trail that can 
be called upon if necessary.
60

CAPTURING EXPERT JUDGEMENT
CAPTURING EXPERT JUDGEMENT
“An expert is someone who has succeeded in making decisions and judgements
simpler through knowing what to pay attention to and what to ignore.” 
Edward de Bono
Introduction
Capturing expert judgement in order to answer an intelligence question can be done in a
variety of different ways: in an event or workshop, offline (i.e. remotely), or in a presentation
and discussion format.
61

CAPTURING EXPERT JUDGEMENT
TOP TEN TIPS FOR SUCCESSFUL ANALYTICAL EVENTS
Nail the question down
Make sure you are absolutely clear about the intelligence question you are tackling. Selecting the
methodology before the question is clear is likely to give you a confused and unsatisfactory output.
Prepare a workshop plan
This should be developed once the question is clear and the analytical approach is selected. The checklist
overleaf shows you what your plan should cover.
Keep the event simple
Activities always take longer than you think because most people love to talk about their subject area, and
free discussion is important. Prioritise and limit the number of tasks to the one or two most important
analytical questions, and do detailed collation or follow-up work offline before or after the event.
Limit the number of participants
Limit participants to those best-equipped to help answer the question at hand. Avoid 'observers', hangers-on
and people new in post, as they are a hindrance. Your event is for focusing effort on producing an output, and
not for networking or easing people into their subject. Lots of participants make for a long, complex event
with sub-groups and plenary sessions. See overleaf for some realistic timing guidelines.
Manage the type and mix of participants
Select the right mix of people to answer the question at hand. The majority should be SMEs with core
experience, such as DIAS colleagues, representatives from other government departments and intelligence
collectors (particularly when specific intelligence reports are to be discussed, such as in ACH or KAC
exercises). Balance sub-groups by expertise and personality prior to the event. This will produce better results
than just making an arbitrary decision on the day, or worse still, inviting people to choose their own subgroup.
Maximise the use of time and expertise
This can be done by dividing up participants and tasks. People always generate more ideas than you expect.
Have a strategy in place to get them to agree quickly which ideas are most important or require further
exploration. Use simple voting systems to identify the top five or ten subjects (e.g. issues, drivers, countries)
out of many. Give each participant three silent votes and then add them up to create a ranking order.
Prepare props
These save a lot of time and help both facilitators and participants stay focused. They can be hand written
on flipchart paper or pre-printed onto large posters for high profile events. The following props are useful for
nearly all events: the question to be answered and sub-questions; definitions; workshop timetable;
environmental scanning categories (e.g. STEMPLES); and pre-printed matrices for completion.
Hold a pre-meeting
Give your event plan to the customer and facilitators to consider. Then hold a meeting to discuss every
aspect of it to clarify and iron out potential problems. For example: establish the role of any seniors attending;
ensure it is clear which participants are in which group; clarify the role of facilitators and scribes; and run
through all the plan elements to test them.
Prepare participants
Prior to an event, send participants a background note laying out: the background to and rationale for the
event; the event aim; how the event fits into any wider analytical project; an explanation of the techniques and
approaches to be used; how the event will be run; what will be expected of participants; and information
about the event output.
Produce a tangible event output
To ensure that useful insights are captured and can be used to inform your analysis, write up the workshop
discussions in a report. Circulate a draft to participants to check whether it is an accurate reflection of the
event. Using analytical techniques to tackle an analytical question provides a clear intellectual audit trail for
judgements reached and a useful framework for producing formal DI or informal reports.
62

CAPTURING EXPERT JUDGEMENT
ANALYTICAL EVENTS
Analytical workshops can generate useful insights by bringing together a group of mainly
SMEs to work as a team to tackle a clear intelligence question in a focused, structured and
systematic fashion. However, even simple events need to be thought through properly and
some key principles followed, otherwise the output will be less than satisfactory and valuable
time will be wasted. These principles, which are laid out below, should speed up event
preparation and ensure it is thorough. Use them in conjunction with the 'How to do it' guidance
relevant to each technique or approach. The guidance was developed for a range of analytical
events (from short and simple to long and complex), and the vast majority of it is applicable
across the board regardless of event type (e.g. 'Nail the question down').
Event planning rules of thumb
All events have some common elements, such as introductions, main work, wash ups.
As participant numbers increase and more than one sub-group is required, further
workshop elements (brief-back and discussion) become necessary. The planning rules of thumb
aim to make analysts aware of these issues and give them a rough idea of how long an event
is likely to take. This is important, as analysts commonly underestimate the time required to
tackle an intelligence question in a workshop environment.
The references to a group or groups of 6 – 8 participants in the diagram directly below (and
throughout this guidance booklet) reflect DI FAM's experience that this number of SMEs in
a single group works best. More than 8 becomes very hard to facilitate effectively and is not
recommended, whilst fewer than 6 can mean a less dynamic event with fewer ideas.
However, if time is really short running a quick informal workshop with a few colleagues
(no fewer than four) is better than not running one at all.
WORKSHOP ELEMENTS | 1 GROUP OF 6–8 PARTICIPANTS | 2 OR 3 GROUPS OF 6–8 PARTICIPANTS
Facilitators required | 1 – 2 | 2 – 4 (short events); 5 – 7 (long events)
Intro, break, wash up | 30 – 45 mins | 30 – 45 mins
Main work | 60 – 105 mins | 90 – 120 mins
Briefs-back and discussion | – | 45 – 60 mins
Total time | 90 – 150 mins | 165 – 225 mins
63

CAPTURING EXPERT JUDGEMENT
Event preparation checklists
Even simple events require the analyst to make a range of preparations, including preparing a
workshop plan. The checklists below will help you do this quickly and efficiently.
TO PUT IN YOUR WORKSHOP PLAN
Purpose of event 
Location
Start time and length
Participant list
Composition of sub-groups
Event structure
Analytical  techniques to be used
Ground rules for discussion
Facilitation requirements
Role of lead facilitator
Scribing requirements
Time-keeping responsibilities
Responsibility for briefing back results 
Role of sponsor / customer
TO DO PRIOR TO THE EVENT
Select and book suitable rooms 
Organise refreshments
Complete visitor forms and bookings
Decide who is to meet and escort visitors
Test the plan at a pre-meeting
Send background note to participants
Prepare facilitators 
Prepare back-briefers
Prepare rooms in advance
64

CAPTURING EXPERT JUDGEMENT
OFFLINE
Instead of running an analytical event you may decide to seek the views of other internal
and external SMEs remotely by email. Such offline work has the benefit of allowing SMEs 
to be more considered in their inputs and saves the analyst the effort of organising a 
workshop. Set against this is the fact that the inputs may take time to trickle in, will need
to be collated, interpreted and de-conflicted and may not generate all the insights that can
fall out of an analytical event involving lively discussion and debate.
Much depends on the nature of the question and the technique, but DI FAM recommends
seeking any input in a clear, structured and consistent way from the selected SMEs. This
includes laying out for the SMEs the intelligence question, the background to it, an
explanation of the techniques or approaches and the task you have set them (including any
definitions, scoring systems etc). A questionnaire may be appropriate or a matrix with
boxes to complete. For example, you may have generated indicators for a particular 
warning problem in a workshop and you wish to have a range of SMEs filter them offline 
to ascertain those that are most useful. See the example under Filtering on page 52, in the
Data Organisation section.
65

CAPTURING EXPERT JUDGEMENT
PRESENTATION AND  DISCUSSION 
A well thought through presentation and discussion - based on the interim or final results of 
an analytical exercise to tackle a question - can be a good half-way house between an analytical
event and offline work. This format can be really useful for soliciting the views of SMEs without
the responsibility of organising and facilitating an analytical workshop and can mitigate the
common problem of unfocused discussions (e.g. during conference sessions) that generate 
few or no new insights. It can allow analysts to delve deeply into an issue in a structured way
and thus be productive and enjoyable for all involved. Application of a technique also provides
the analyst with a clear framework to help explain to other SMEs how they reached their 
conclusions.
For analysts wishing to adopt this kind of approach for conferences or other fora, DI FAM
recommends using a suitable technique to explore a question of interest and then generating
some conclusions to form the basis for a discussion. For example, conduct a SWOT analysis
on an organisation’s situation with regard to a particular objective or use the Cone of
Plausibility to generate some scenarios on a topic of interest and then present the findings 
to SMEs with a view to seeking their insights on your results.
The results could be presented quite formally (e.g. using PowerPoint) or informally (e.g. hand-drawn
on flipchart paper or a whiteboard). Depending on the time you have available and the
importance and complexity of the question, you may wish to share your results with the SMEs
prior to the presentation/discussion to give them additional time to consider your conclusions.
When presenting your results by email or in person make sure that you clearly identify the
question and the background to it. You also need to explain the techniques, approaches and
any definitions used, and lay out what you wish the SMEs to consider (e.g. any key uncertainties
or the intelligence gaps).
66


PROFESSIONAL HEAD OF DEFENCE INTELLIGENCE ANALYSIS
Defence Intelligence
Futures and Analytical Methods Team
Project and Workshop Breakdown of Responsibilities
Introduction
DI’s Futures and Analytical Methods Team (FAM) provides different levels of support to analytical projects.
This document defines those levels of support, to ensure effective collaboration between the customer
and the FAM Team. The appropriate level of support is determined by the nature of the analytical project,
the knowledge and experience of the customer, the FAM Team’s capacity and DI FAM’s responsibility to
build analytical capability throughout DIAS.
Definitions
CUSTOMER: The official from MoD or an OGD who will be the end reader and has commissioned the work via the intelligence analyst.
SPONSOR: The individual or individuals who commission the analytical project. This is usually a DIAS analyst, but in some circumstances will be the customer for the work.
ANALYTICAL PROJECT: A discrete piece of analysis, conducted using a structured approach, which will answer an explicit intelligence question to support a decision by a specified customer. An analytical project should always result in a written output of some kind.
WORKSHOP: An event involving multiple subject matter experts, which provides all or some of the input to an analytical project.
Levels of Analytical Support

Consultative
Description: The FAM Team will provide advice to the sponsor about the analytical project, usually focussing on planning and running a workshop.
Suitability: Consultative support is available to any sponsor.

Collaborative
Description: In this model the FAM Team and the sponsor will jointly run any workshops. The FAM Team are more involved in designing and facilitating the analytical project, but the sponsor retains responsibility for capturing the output of the workshop and administering it.
Suitability: Collaborative support is provided to sponsors who are less confident about running an analytical event. It is designed to build the confidence of the sponsor, so that they will be able to run analytical projects more independently in the future.

Comprehensive
Description: This is the most intensive level of support provided by the FAM Team. In this model, we help the sponsor to design the analytical workshop, provide raw output from the workshop and provide advice on the presentation of the output.
Suitability: Comprehensive support will be provided only for analytical projects of the highest organisational priority or policy sensitivity.
67

Division of responsibilities for workshops
An analytical event involves considerable preparation, and it is important not to leave things to the last
minute or you will put your event at risk. To ensure your preparation is thorough, work through the
following checklist with FAM and agree the division of responsibilities. The responsibilities associated
with a project and any events, depending on the nature of FAM’s involvement, are shown below.
TASK | LEVEL OF SUPPORT: CONSULTATIVE | COLLABORATIVE | COMPREHENSIVE

ANALYTICAL STRUCTURE
Refine key project questions | Sponsor and FAM | Sponsor, Customer and FAM | Sponsor, Customer and FAM
Decide how to structure any event | Sponsor and FAM | Sponsor and FAM | Sponsor and FAM
Decide on length and size of any event | Sponsor and FAM | Sponsor and FAM | Sponsor and FAM

EVENT PREPARATION
Identify and book suitable room(s) | Sponsor | Sponsor | Sponsor and FAM
Identify and invite attendees | Sponsor | Sponsor | Sponsor and FAM
Draw up a detailed workshop plan | Sponsor | Sponsor and FAM | FAM
Produce and circulate background note | Sponsor | Sponsor and FAM | FAM
Organise refreshments (if necessary) | Sponsor | Sponsor | Sponsor
Produce supporting materials | Sponsor | Sponsor and FAM | FAM
Prepare the room(s) in advance | Sponsor | Sponsor and FAM | FAM
Host event pre-meeting | Sponsor | Sponsor | FAM
Book in visitors | Sponsor | Sponsor | Sponsor

EVENT FACILITATION
Introduce Event | Sponsor | Sponsor | Sponsor
Lead facilitation | Sponsor | Sponsor and FAM | FAM
Escort visitors | Sponsor | Sponsor | Sponsor
Identify and provide assistant facilitators (if necessary) | Sponsor | Sponsor and FAM | FAM
Time-keep and capture output | Sponsor | Sponsor and FAM | FAM
Design subgroups | Sponsor | Sponsor and FAM | Sponsor and FAM
Identify backbriefers | Sponsor | Sponsor and FAM | Sponsor and FAM

FOLLOW-UP WORK
Transcribe raw output | Sponsor | FAM | FAM
Produce workshop report | Sponsor | Sponsor | Sponsor and FAM
Produce and review resulting product | Sponsor | Sponsor and FAM | Sponsor and FAM
Provide feedback on project | Sponsor | Sponsor | Sponsor
68

All assessments have the following basic structure:
Facts A, B, C... are true (Evidence base)
If facts A, B, C... are true, then conclusions X, Y, Z are true (Argument)
Therefore conclusions X, Y, Z are true (Key judgements)
One simple test to see whether an assessment holds water is to ask
yourself if it is possible for the evidence base to be true, but for the key
judgements to be false. If you can think of a way this could happen, then
the conclusions are too strong for the evidence.
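Put more formally, as a minimal logical sketch (E and K are shorthand introduced here for the evidence base and the key judgements; they are not notation used elsewhere in this guide):

$E$ (Evidence base)
$E \rightarrow K$ (Argument)
$\therefore\ K$ (Key judgements)

The test above amounts to asking whether $E \wedge \neg K$ is conceivable: if it is, the argument does not hold and the key judgements outrun the evidence.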
Date of issue: Nov 2016
(C) UK MOD CROWN COPYRIGHT 2016
v2.0