• When S receives request (KeyGen, sid) from Fpke, S runs gen(1^k) to obtain public
encryption key e and private decryption key d. It then returns e as well as E = enc(e, ·)
and D = dec(d, ·) to Fpke.

Assume there is an environment Z that can distinguish between interactions. Without
loss of generality, we assume that in each execution, there are n parties asked to create
encryption keys and Z asks for each key to encrypt exactly p messages. We use Z to
construct the following algorithm G:

1. G receives a public key e and chooses numbers j ← {1, ..., n} and h ← {1, ..., p}
uniformly at random.

2. G simulates a copy of Z and simulates its interaction with a system running πS and
A.

3. When Z activates party Pj with input (KeyGen, sid), G has Pj output the value e from
G's input. (All other parties run gen when asked to generate a key.)

4. At first, when Z instructs a party to encrypt a message m(l,i) under e_Pl (where e_Pl is
Pl's encryption key), the encrypter outputs c(l,i) = enc(e_Pl, m(l,i)). (For l = j, output
enc(e, m(j,i)).)

5. At the h-th request to encrypt a message under e (namely m(j,h)), G outputs (m0, m(j,h))
to its challenger for the IND-CCA2 game, where m0 is a fixed message in D. Upon
receiving back the challenge ciphertext c*, it has the encrypter output c(j,h) = c* as the
encryption of m(j,h).

6. Subsequently, when Z instructs a party to encrypt a message m under e_Pl, G has
the encrypter output enc(e_Pl, m0), where m0 is the same fixed message as before. (For
l = j, output enc(e, m0).)

7. Whenever a party Pl is activated with input (Decrypt, sid, c) where c = c(l,i) for some i,
G lets Pl output plaintext m(l,i) (even in those cases where c(l,i) = enc(e_Pl, m0)).
If Pl ≠ Pj and c ≠ c(l,i) for all i, then Pl responds with dec(d_Pl, c). If l = j but
c ≠ c(j,i) for all i, then G sends c to the decryption oracle and has Pj output the
returned plaintext m.

8. When Z outputs bit b and halts, G outputs b and halts as well.
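The encryption handling in steps 1 and 4 through 6 can be sketched as follows. This is a minimal illustration, not the proof's formal construction: `reduction_G`, `requests`, `challenge`, and the trivial "encryption" are hypothetical stand-ins (any IND-CCA2 scheme fits the same shape).

```python
import random

# Toy stand-ins for (gen, enc); hypothetical placeholders for illustration only.
def gen():
    k = random.randrange(1 << 16)
    return k, k                      # (public key, private key), toy only

def enc(pk, m):
    return (pk, m)                   # placeholder "ciphertext"

M0 = b"fixed"                        # the fixed message m0 in D from step 5

def reduction_G(e, challenge, n, p, requests):
    """Sketch of G's encryption handling. `requests` is the ordered list of
    (party l, message m) encryption requests Z makes; `challenge(m0, m1)`
    queries the IND-CCA2 challenger and returns c*."""
    j = random.randrange(1, n + 1)   # step 1: guess which party's key is e
    h = random.randrange(1, p + 1)   # step 1: guess which request under e
    keys = {l: gen() for l in range(1, n + 1) if l != j}
    seen_j, challenged, view = 0, False, []
    for (l, m) in requests:
        pk = e if l == j else keys[l][0]
        if challenged:
            c = enc(pk, M0)          # step 6: after c*, everything encrypts m0
        elif l == j and (seen_j := seen_j + 1) == h:
            c = challenge(M0, m)     # step 5: embed the challenge ciphertext c*
            challenged = True
        else:
            c = enc(pk, m)           # step 4: honest encryptions before c*
        view.append(c)
    return view                      # Z's view; Z's output bit becomes G's guess
```

Note how a single flag marks the hybrid boundary: every encryption before the challenge is honest, every one after it hides m0.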

We now argue that G succeeds at guessing b with non-negligible probability using a
hybrid argument. Let Z(φ) denote the output of Z after observing a simulated execution
in which Z sees ciphertexts c1, ..., c_{φ-1}, c_φ, c_{φ+1}, ..., c_{np}, where ci for i ≥ φ is the
encryption of the fixed message m0 under the appropriate encryption key. Note that when
all simulated parties (Z and the Pl's) have the same randomness, the messages requested for
encryption, up to the φ-th message, will be the same for Z(φ) and Z(φ + 1), but subsequent
encryption requests may not be the same.
By our original assumption, Z is able to distinguish between the ideal process (Z(1))
and the real world (Z(np + 1)) with non-negligible probability ε; this implies |Pr[Z(1) =
0] − Pr[Z(np + 1) = 0]| > ε. Without loss of generality, let us say that in fact


Pr[Z(1) = 0] − Pr[Z(np + 1) = 0] > ε.


This implies there exists a φ* such that


Pr[Z(φ*) = 0] − Pr[Z(φ* + 1) = 0] > ε/np.
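The existence of such a φ* is the standard telescoping step, made explicit here (a sketch, assuming the np hybrids defined above):

```latex
\Pr[Z(1)=0]-\Pr[Z(np+1)=0]
  =\sum_{\phi=1}^{np}\Bigl(\Pr[Z(\phi)=0]-\Pr[Z(\phi+1)=0]\Bigr)>\epsilon
```

Since the np summands total more than ε, at least one summand must exceed ε/np.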

Assume G guesses (j, h) such that m(j,h) is the φ*-th message. This means that if G was
given back c_{φ*} = enc(e, m0), then G will output Z(φ*). Conversely, G will output Z(φ* + 1)
if c_{φ*} = enc(e, m(j,h)). Thus when m(j,h) is the φ*-th message, G will output b' = b with
probability 1/2 + ε/(2np). Because j and h are random, we hit φ* with probability 1/(np),
and so G guesses b with probability 1/2 + ε/(2(np)^2), which is a non-negligible advantage.

The existence of G contradicts S's IND-CCA2 security. It must then be that
there does not exist a Z that is able to distinguish between interactions in the ideal
process with Fpke and the real world with πS. Thus, for a local, stateless IND-CCA2
secure encryption scheme S, πS securely realizes Fpke.


6.2 Signatures

As before, the adversary's powers are strictly weaker in our new formulation of Fsig (Figure
5-3) as compared to the previous one (Figure 4-3 from [15]), implying that a local, stateless
scheme must be at least EUF-ACMA secure to realize Fsig. In this case, however, we can go
one step further and show that any realizing scheme must be at least a strong signature
scheme. We also show that strong signature schemes may be used to securely realize this
new formulation of Fsig.

Definition 20 Define protocol πΣ using signature scheme Σ as follows:

1. When activated within some party Pi and with input (KeyGen, id), run algorithm gen,
output the verification key v, and record the signing key s.

2. When activated within Pi and with input (Sign, id, m), return sig(s, m).

3. When activated within some party Pj and with input (Verify, id, v', m, σ), return
ver(v', m, σ).
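As a minimal sketch, the three activations of πΣ might look as follows in Python. The hash-based gen/sig/ver here are toy placeholders of our own invention (they are not a real public-key signature scheme); only the wrapper's shape illustrates the definition.

```python
import hashlib
import os

# Toy stand-in for Sigma = (gen, sig, ver) -- illustration only, not a real
# signature scheme (the "verification key" equals the signing key).
def gen():
    s = os.urandom(16)
    return s, s                              # (signing key s, verification key v)

def sig(s, m):
    return hashlib.sha256(s + m).digest()

def ver(v, m, sigma):
    return 1 if hashlib.sha256(v + m).digest() == sigma else 0

class PiSigma:
    """One party's view of protocol pi_Sigma (Definition 20)."""

    def keygen(self, sid):                   # (KeyGen, id): run gen, keep s, output v
        self.s, self.v = gen()
        return self.v

    def sign(self, sid, m):                  # (Sign, id, m): return sig(s, m)
        return sig(self.s, m)

    @staticmethod
    def verify(sid, v_prime, m, sigma):      # (Verify, id, v', m, sigma)
        return ver(v_prime, m, sigma)
```

Note that verification is stateless and usable by any party Pj, matching the local, stateless requirement of the theorem below.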



Theorem 5 Let Σ = (gen, sig, ver) be a signature scheme over domain D composed of
local, stateless algorithms. Then πΣ securely realizes Fsig with respect to domain D and
non-adaptive adversaries if and only if Σ is strong (Definition 6).


Note that neither the Fsig formulation nor the strong security definition requires that
verification be a deterministic process. We allow for probabilistic verification algorithms in
order to give as general a theorem statement as possible; in practice, however, deterministic
verification algorithms are almost always used. Strong security itself already implies a sort of
verification "consistency": if it were possible for an adversary to produce a message
and signature for which ver returned both 0 and 1 with non-negligible probability, then either
Σ is not complete or an adversary is able to violate Σ's strict unforgeability.
Proof. "Only if": Given a Σ that is not strong, we will construct an environment Z
that can distinguish whether it is interacting with A and πΣ or with S and Fsig.


1. Assume Σ is not complete, i.e., there exists m ∈ Mk such that Pr[(s, v) ← gen(1^k); σ ←
sig(s, m) : 0 ← ver(v, m, σ)] > neg(k) for infinitely many k's. In this case, Z sets
sid = (P, 0) and activates some uncorrupted party P with input (KeyGen, sid), fol-
lowed by (Sign, sid, m). After obtaining v and σ, it then activates some other party
P' with (Verify, sid, m, σ, v) and outputs the returned verification value.

When interacting with the ideal process, Z clearly always outputs 1, whereas in the
interaction with πΣ, Z outputs 0 with non-negligible probability. Thus, a πΣ that realizes
Fsig must use a Σ that exhibits completeness.

2. Assume Σ is not strictly unforgeable; i.e., there exists a PPT forger G that can
produce a valid pair (m, σ) previously unknown to it. Z runs an internal copy of G and
hands it the public key v obtained from an uncorrupted party P. Whenever G asks
its oracle to sign a message m, Z activates P with input (Sign, sid = (P, 0), m) and
reports the output to G. When G generates a pair (m, σ), Z proceeds as follows:

(a) If (m, σ) was a previous response to one of G's oracle queries, output 0 and halt.

(b) Otherwise, activate some other uncorrupted party with input (Verify, sid, m, σ, v)
and output the verification result.

Because v belongs to an uncorrupted party, when Z interacts with the ideal process it
will always output 0. By the definition of G, when Z interacts with πΣ in the concrete
world, it will output 1 with some non-negligible probability. Thus, a πΣ that realizes
Fsig must use a Σ that exhibits strict unforgeability.
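The environment built from the forger in case 2, with checks (a) and (b), can be sketched as follows. `ToyParty`, `env_Z`, and the hash-based scheme are hypothetical stand-ins (not a real signature scheme), used only to make the environment's bookkeeping concrete.

```python
import hashlib
import os

class ToyParty:
    # Minimal stand-in for an uncorrupted party running pi_Sigma (toy scheme:
    # the published verification key equals the signing key -- illustration only).
    def keygen(self, sid):
        self.s = os.urandom(16)
        return self.s
    def sign(self, sid, m):
        return hashlib.sha256(self.s + m).digest()
    def verify(self, sid, v, m, sigma):
        return 1 if hashlib.sha256(v + m).digest() == sigma else 0

def env_Z(forger, party):
    """Sketch of the environment Z from case 2: wrap a forger, answer its
    oracle queries through the party, then apply rules (a) and (b)."""
    sid = ("P", 0)
    v = party.keygen(sid)
    responses = set()
    def sign_oracle(m):                      # Z relays oracle queries to P
        sigma = party.sign(sid, m)
        responses.add((m, sigma))
        return sigma
    m, sigma = forger(v, sign_oracle)
    if (m, sigma) in responses:              # (a) replayed oracle response: output 0
        return 0
    return party.verify(sid, v, m, sigma)    # (b) output the verification result
```

In the ideal process the verify step always returns 0 for a fresh pair, so any forger that succeeds with non-negligible probability makes Z a distinguisher.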


Therefore, πΣ realizes Fsig only if Σ is strong.


"If": Assume Σ is strong but πΣ does not realize Fsig; i.e., there is a real-life adver-
sary A such that for any ideal-process adversary S there exists an environment Z that can
distinguish between the interactions of A and πΣ and those of S and Fsig. Since Z succeeds
for any S, it must succeed for the following S:

• S runs a simulated copy of A, denoted AS .

• Any input from Z is forwarded to AS. AS's outputs are copied to S's outputs.

• When S receives a message (KeyGen, sid, P) from Fsig, it checks whether sid is of the form
(P, sid'). If it is not, S ignores the request. If it is, S runs (s, v) ← gen(1^k), records
s, and returns (VerificationKey, sid, v) along with algorithms S = sig(s, ·) and V =
ver(·, ·, ·) to Fsig.

• When AS tries to corrupt some party P, S corrupts P in the ideal process. If P is the
signer, then S reveals the signing key s (and potentially the internal state of algorithm
sig) as the internal state of P.

Assume Σ is complete (otherwise the theorem is proven). We consider how the environment
might distinguish between the ideal and concrete worlds. There are two cases:

1. a party P0 outputs a new valid signature under another honest party P1's verification
key without ever having been told the new signature value, or

2. a signature for a message that was never signed by an honest party P verifies under
the key associated with P.

(1) This could never happen in the ideal process, since Fsig will never verify a forged
signature under an uncorrupted verification key, so it must be that this event happens with
non-negligible probability during Z's interaction with the concrete world. P0 outputs the
new signature at the behest of either Z or A; if done on behalf of Z, then this is no
different than situation (2), and we thus only consider the case where A created this signature.
If it did, then A, with adaptive access to a signing oracle, is capable of producing a new valid
signature under a desired verification key with non-negligible probability. This violates Σ's
strict unforgeability, so this must not be the case.
(2) This too could never happen in the ideal process, so it must be that the event
happens with non-negligible probability in the concrete world. We construct the following
PPT algorithm G:

• G runs a simulated copy of Z and simulates for Z an interaction with S in the ideal
process with Fsig (and some encryption functionality). Note that G plays the role of
both S and Fsig for simulated Z.

• Instead of actually simulating S, G runs a simulated copy of A and forwards its
inputs/outputs accordingly. We label it AG.

• When S is asked to generate a veri¬cation key, instead of running gen, G gives AG
and the simulated Fsig the key, v, under which a forgery is desired.

• When the simulated Fsig is asked to sign m by the proper party, G asks its oracle to
sign m and then has Fsig return the obtained σ.

• Whenever the simulated Z activates some uncorrupted party with input
(Verify, sid, m, σ, v), G checks whether (m, σ) is a valid signature pair it has not
previously seen. If it is, G outputs the pair and halts; if not, it continues the simulation.
If AG asks to corrupt the signer, then G halts with a failure output.
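G's bookkeeping over the simulated execution can be sketched as follows. `forgery_extractor`, `run_Z`, and the event format are hypothetical scaffolding of our own, not part of the proof; they only illustrate how G watches Sign and Verify activity for a fresh valid pair.

```python
def forgery_extractor(v, sign_oracle, ver, run_Z):
    """Sketch of G from situation (2). `run_Z(v)` is a hypothetical driver for
    the simulated Z/AG that yields events ('sign', m) from the simulated F_sig
    and ('verify', m, sigma) from Verify activations; `ver(v, m, sigma)` checks
    signatures under the challenge key v."""
    seen = set()
    for event in run_Z(v):
        if event[0] == "sign":                        # simulated F_sig signs via the oracle
            m = event[1]
            seen.add((m, sign_oracle(m)))
        elif event[0] == "verify":                    # watch Verify activations
            _, m, sigma = event
            if (m, sigma) not in seen and ver(v, m, sigma):
                return (m, sigma)                     # fresh valid pair: output the forgery
    return None                                       # no forgery observed
```

The set of oracle responses is exactly what distinguishes a replay from a strict forgery, mirroring rule (a) of the "only if" direction.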

Consider the case where AG outputs a new valid (m, σ) pair. If it doesn't, then by Σ's
completeness, the nature of the ideal encryption's realization, and our analysis of situation
(1), Z's view of an interaction with A and πΣ is statistically close to its view of an interaction
with S and Fsig. Since we know Z distinguishes between these two cases with non-negligible
probability, it must be that, with non-negligible probability, a previously unseen signature
pair that verifies is submitted for verification. In this case G outputs a violation of Σ's
strict unforgeability with non-negligible probability. Since Σ is strong, this must not be the
case.
It then follows that the environment is unable to meaningfully distinguish between the
ideal and real worlds and thus πΣ realizes Fsig if Σ is strong.


Note that in the "if" direction we allowed for adaptive adversaries, but the "only if"
direction required non-adaptive adversaries. This implies that if Σ is strong then πΣ securely
realizes Fsig against adaptive adversaries (but not vice versa).




Chapter 7


Adding Signatures to Universally
Composable Symbolic Analysis

Having redefined the ideal functionalities as needed, we reconstruct the UCSA framework,
now expanded to include digital signatures.

[Figure 7-1 diagram: the Dolev-Yao model (P, {|·|}_K^e, [·]_K^v; DY 2-party mutual
authentication) and a protocol checker are related via the Mapping Lemma (§7.3, §7.4)
to the UC/ideal world (p, Fcpke, Fcert; F2ma), which §6 relates to the real world
(p, S, Σ), yielding the resulting UCSA proof.]

Figure 7-1: A graphical representation of the UCSA framework with signatures



7.1 Dolev-Yao Algebra

The Dolev-Yao algebra is redefined to include the symbols necessary for signatures.

Definition 21 (The Dolev-Yao Message Algebra) Messages in the Dolev-Yao algebra
A are composed of atomic elements of the following types:

• Party identifiers (M) – These are denoted by symbols P1, P2, ... for a finite number of
names in the algebra. These are public and are associated with a role, which is either
that of Initiator or Responder.

• Nonces (R) – These can be thought of as a finite number of private, unpredictable
random strings. These symbols are denoted by R1, R2, and so on.

• Public keys (K_Pub) – These are denoted by symbols K^e_P1, K^e_P2, ..., which are public
and each associated with a particular party identifier.

• Verification keys (K_Ver) – These are denoted by symbols K^v_P1, K^v_P2, ..., which are
public and each associated with a particular party identifier.

• A garbage term, written G, to represent ill-formed messages,

• ⊥, to represent an error or failure,

• Starting , to indicate that a protocol execution has begun, and

• Finished , to indicate that a protocol execution has ended.

Messages in the algebra can be compounded by the following symbolic operations:

• pair : A × A → A. When messages m and m' are paired, we write m|m'.

• encrypt : K_Pub × A → A. When message m is encrypted with public key K^e_P, we
write {|m|}_K^e_P.

• sign : K_Ver × A → A. When message m is signed relative to verification key K^v_P,
we write [m]_K^v_P.
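The algebra can be rendered as a free datatype; here is a sketch in Python with class names of our own choosing (the thesis treats these purely symbolically). Equality on frozen dataclasses mirrors the freeness of the algebra: two terms are equal exactly when they are built identically.

```python
from dataclasses import dataclass
from typing import Any

# Sketch of the message algebra A as a free datatype (illustrative naming).

@dataclass(frozen=True)
class Party:  name: str                 # party identifiers P1, P2, ...

@dataclass(frozen=True)
class Nonce:  idx: int                  # nonces R1, R2, ...

@dataclass(frozen=True)
class PubKey: owner: str                # public keys K^e_P

@dataclass(frozen=True)
class VerKey: owner: str                # verification keys K^v_P

@dataclass(frozen=True)
class Pair:   left: Any; right: Any     # pairing m|m'

@dataclass(frozen=True)
class Enc:    key: PubKey; msg: Any     # encryption {|m|}_K^e_P

@dataclass(frozen=True)
class Sig:    key: VerKey; msg: Any     # signature [m]_K^v_P
```

Atomic terms G, ⊥, Starting, and Finished could be added as further singleton classes in the same style.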

Definition 22 (Closure) Let

• RAdv ⊂ R be the set of nonces associated with the adversary,

• K^e_Adv = {K^e_P : P ∈ MAdv} be the set of encryption keys belonging to corrupted
parties (K^e_Adv ⊂ K_Pub), and

• K^v_Adv = {K^v_P : P ∈ MAdv} be the set of verification keys belonging to corrupted
parties (K^v_Adv ⊂ K_Ver).

Then the closure of a set S ⊆ A, written C[S], is the smallest subset of A such that:

1. S ⊆ C[S],

2. M ∪ K_Ver ∪ K_Pub ∪ RAdv ⊆ C[S],

3. If {|m|}_K ∈ C[S] and K ∈ K^e_Adv, then m ∈ C[S],

4. If m ∈ C[S] and K ∈ K_Pub, then {|m|}_K ∈ C[S],

5. If [m]_K ∈ C[S], then m ∈ C[S] (see footnote 1),

6. If m ∈ C[S] and K ∈ K^v_Adv, then [m]_K ∈ C[S],

7. If m|m' ∈ C[S], then m ∈ C[S] and m' ∈ C[S], and

8. If m ∈ C[S] and m' ∈ C[S], then m|m' ∈ C[S].
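Because rules 4, 6, and 8 generate unboundedly many terms, C[S] itself is infinite; membership can nevertheless be decided by first saturating S under the destructor rules (3, 5, 7) and then recursing through the constructor rules (2, 4, 6, 8). The split into `analyze` and `in_closure` is our own decomposition, sketched in Python with minimal stand-in term constructors:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Pair: left: Any; right: Any        # m | m'

@dataclass(frozen=True)
class Enc:  key: Any; msg: Any           # {|m|}_K

@dataclass(frozen=True)
class Sig:  key: Any; msg: Any           # [m]_K

def analyze(S, adv_enc_keys):
    """Saturate S under the destructor rules (3), (5), (7)."""
    known, frontier = set(S), list(S)
    while frontier:
        t = frontier.pop()
        parts = []
        if isinstance(t, Pair):
            parts = [t.left, t.right]                    # rule 7: split pairs
        elif isinstance(t, Enc) and t.key in adv_enc_keys:
            parts = [t.msg]                              # rule 3: adversarial decryption
        elif isinstance(t, Sig):
            parts = [t.msg]                              # rule 5: signatures reveal messages
        for p in parts:
            if p not in known:
                known.add(p)
                frontier.append(p)
    return known

def in_closure(m, known, public_atoms, pub_keys, adv_ver_keys):
    """Decide m in C[S] via the constructor rules (2), (4), (6), (8)."""
    if m in known or m in public_atoms:                  # rules 1 and 2
        return True
    rec = lambda x: in_closure(x, known, public_atoms, pub_keys, adv_ver_keys)
    if isinstance(m, Pair):                              # rule 8: build pairs
        return rec(m.left) and rec(m.right)
    if isinstance(m, Enc) and m.key in pub_keys:         # rule 4: encrypt under any public key
        return rec(m.msg)
    if isinstance(m, Sig) and m.key in adv_ver_keys:     # rule 6: sign under corrupted keys
        return rec(m.msg)
    return False
```

Saturation terminates because destructors only produce subterms of S; the recursive check then only descends through subterms of the query m.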


The algebra remains free under these changes. Figure 7-2 shows an updated example
symbolic message parse tree.


Definition 23 (Dolev-Yao Trace) We inductively define a Dolev-Yao trace t for protocol
P as a description of events that occur during the execution of P.


t = H0 H1 H2 ... H_{n-2} H_{n-1} H_n

where event Hi is either


• of the form ["input", P, oi, P', S], which indicates the initial input of participant P to
take the role oi and interact with participant P', assuming initial internal state S.

1. As explained in Section 2.2.2, we only consider signature schemes that are message-revealing or,
alternatively, only valid when transmitted with their message.


[Figure 7-2 (diagram): example symbolic message parse trees built from the terms R1,
{|P1|}_K^e_P1, R2, and K^v_P0; the tree structure itself is not reproduced here.]
