
• When S receives a request (KeyGen, sid) from Fpke, S runs gen(1^k) to obtain public encryption key e and private decryption key d. It then returns e as well as E = enc(e, ·) and D = dec(d, ·) to Fpke.

Assume there is an environment Z that can distinguish between the interactions. Without loss of generality, we assume that in each execution there are n parties asked to create encryption keys and that Z asks for each key to encrypt exactly p messages. We use Z to construct the following algorithm G:

1. G receives a public key e and chooses numbers j ∈ {1, ..., n} and h ∈ {1, ..., p} uniformly at random.

2. G simulates a copy of Z and simulates its interaction with a system running πS and A.

3. When Z activates party Pj with input (KeyGen, sid), G has Pj output the value e from G's input. (All other parties run gen when asked to generate a key.)

4. At first, when Z instructs a party to encrypt a message m(l,i) under ePl (where ePl is Pl's encryption key), the encrypter outputs c(l,i) = enc(ePl, m(l,i)). (For l = j, output enc(e, m(j,i)).)

5. At the h-th request to encrypt a message under e (message m(j,h)), G outputs the pair (m0, m(j,h)) to its challenger for the IND-CCA2 game, where m0 is a fixed message in D. Upon receiving back the challenge ciphertext c*, it has the encrypter output c(j,h) = c* as the encryption of m(j,h).

6. Subsequently, when Z instructs a party to encrypt a message m under ePl, G has the encrypter output enc(ePl, m0), where m0 is the same fixed message as before. (For l = j, output enc(e, m0).)

7. Whenever a party Pl is activated with input (Decrypt, sid, c) where c = c(l,i) for some i, G lets Pl output the plaintext m(l,i) (even in those cases where c(l,i) = enc(ePl, m0)). If Pl ≠ Pj and c ≠ c(l,i) for all i, then Pl responds with dec(dPl, c). If l = j but c ≠ c(j,i) for all i, then G sends c to the decryption oracle and has Pj output the returned plaintext m.

8. When Z outputs bit b and halts, G outputs b and halts as well.
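The request-handling logic of G (steps 1 and 4 through 6 above) can be sketched in code. The scheme below is a toy stand-in, not the IND-CCA2 scheme from the text, and the names HybridG, on_encrypt, and challenge_oracle are illustrative assumptions of this sketch; it only makes G's bookkeeping concrete.

```python
import random

# Toy stand-ins for the encryption scheme: illustrative only, NOT secure.
def gen():
    k = random.randrange(1, 256)
    return k, k                     # (public key e, private key d) in the toy

def enc(e, m):
    return bytes(b ^ e for b in m)  # toy "encryption"

class HybridG:
    """Sketch of how G answers Z's encryption requests (steps 1, 4-6)."""
    def __init__(self, e, n, p, m0):
        self.e = e                            # challenge public key for P_j
        self.j = random.randrange(1, n + 1)   # step 1: random party index
        self.h = random.randrange(1, p + 1)   # step 1: random request index
        self.m0 = m0                          # fixed message in D
        self.count_j = 0                      # requests seen for P_j's key
        self.challenged = False

    def on_encrypt(self, l, e_Pl, m, challenge_oracle):
        """Z asks party P_l (with key e_Pl) to encrypt m."""
        if l == self.j:
            e_Pl = self.e                     # step 3: P_j uses G's input key
            self.count_j += 1
            if self.count_j == self.h:        # step 5: embed the challenge
                self.challenged = True
                return challenge_oracle(self.m0, m)
        if self.challenged:
            return enc(e_Pl, self.m0)         # step 6: fixed message m0
        return enc(e_Pl, m)                   # step 4: honest encryption
```

Before the h-th request under e every ciphertext is an honest encryption; afterward every request is answered with an encryption of the fixed m0, exactly the hybrid pattern the argument below relies on.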

We now argue that G succeeds at guessing b with non-negligible probability, using a hybrid argument. Let Z(τ) denote the output of Z after observing a simulated execution in which Z sees ciphertexts c1, ..., cτ−1, cτ, cτ+1, ..., cnp−1, cnp, where ci for i ≥ τ is the encryption of the fixed message m0 under the appropriate encryption key. Note that when all simulated parties (Z and the Pl's) have the same randomness, the messages requested for encryption, up to the τ-th message, will be the same for Z(τ) and Z(τ + 1), but subsequent encryption requests may not be the same.

By our original assumption, Z is able to distinguish between the ideal process (Z(1)) and the real world (Z(np + 1)) with non-negligible probability ε; this implies |Pr[Z(1) = 0] − Pr[Z(np + 1) = 0]| > ε. Without loss of generality, let us say that in fact

Pr[Z(1) = 0] − Pr[Z(np + 1) = 0] > ε.

This implies there exists a τ∗ such that

Pr[Z(τ∗) = 0] − Pr[Z(τ∗ + 1) = 0] > ε/np.

Assume G guesses (j, h) such that m(j,h) is the τ∗-th message. This means that if G was given back cτ∗ = enc(e, m0), then G will output Z(τ∗). Conversely, G will output Z(τ∗ + 1) if cτ∗ = enc(e, m(j,h)). Thus, when m(j,h) is the τ∗-th message, G will output b′ = b with probability 1/2 + ε/(2np). Because j and h are random, we hit τ∗ with probability 1/np, and so G guesses b with probability 1/2 + ε/(2(np)^2), which is a non-negligible advantage.
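The advantage arithmetic in the preceding paragraph can be sanity-checked numerically: condition on whether the random guess (j, h) hits the distinguishing hybrid index (probability 1/np), where G succeeds with probability 1/2 + ε/(2np), and credit G with only a fair coin flip otherwise. The concrete values of ε, n, p below are arbitrary illustrative choices.

```python
# Sanity check of the advantage bound with concrete (illustrative) numbers.
eps, n, p = 0.3, 4, 5
hit = 1.0 / (n * p)                       # probability the guess hits tau*
succ_given_hit = 0.5 + eps / (2 * n * p)  # success when the guess is right
overall = hit * succ_given_hit + (1 - hit) * 0.5

# The total matches the claimed 1/2 + eps / (2 (np)^2).
assert abs(overall - (0.5 + eps / (2 * (n * p) ** 2))) < 1e-12
```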

The existence of G contradicts S's IND-CCA2 security. It must then be that there does not exist a Z that is able to distinguish between interactions in the ideal process with Fpke and the real world with πS. Thus, for a local, stateless, IND-CCA2-secure encryption scheme S, πS securely realizes Fpke.

6.2 Signatures

As before, the adversary's powers are strictly weaker in our new formulation of Fsig (Figure 5-3) as compared to the previous one (Figure 4-3 from [15]), implying that a local, stateless scheme must be at least EUF-ACMA to realize Fsig. In this case, however, we can go one step further and show that any realizing scheme must be at least a strong signature scheme. We also show that strong signature schemes may be used to securely realize this new formulation of Fsig.

Definition 20 Define protocol πΣ using signature scheme Σ as follows:

1. When activated within some party Pi with input (KeyGen, id), run algorithm gen, output the verification key v, and record the signing key s.

2. When activated within Pi with input (Sign, id, m), return sig(s, m).

3. When activated within some party Pj with input (Verify, id, v′, m, σ), return ver(v′, m, σ).
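The three activations above can be sketched minimally, assuming a toy MAC-style stand-in for Σ (not a real signature scheme, since here the verification key equals the signing key); the sketch only illustrates the stateless request/response shape of the protocol, and the class and method names are assumptions of this sketch.

```python
import hashlib
import os

# Toy stand-in for Sigma = (gen, sig, ver): illustrative only, NOT a
# secure signature scheme (verification needs the secret in this toy).
def gen():
    s = os.urandom(16)
    return s, s                     # (signing key s, verification key v)

def sig(s, m):
    return hashlib.sha256(s + m).digest()

def ver(v, m, tau):
    return 1 if hashlib.sha256(v + m).digest() == tau else 0

class PiSigma:
    """Sketch of the protocol from Definition 20, run within one party."""
    def __init__(self):
        self.s = None
        self.v = None

    def activate(self, msg):
        if msg[0] == "KeyGen":            # step 1: generate and record keys
            self.s, self.v = gen()
            return self.v
        if msg[0] == "Sign":              # step 2: sign with recorded key
            _, _id, m = msg
            return sig(self.s, m)
        if msg[0] == "Verify":            # step 3: verify against given key
            _, _id, v2, m, tau = msg
            return ver(v2, m, tau)
```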

Theorem 5 Let Σ = (gen, sig, ver) be a signature scheme over domain D composed of local, stateless algorithms. Then πΣ securely realizes Fsig with respect to domain D and non-adaptive adversaries if and only if Σ is strong (Definition 6).

Note that neither the Fsig formulation nor the strong security definition requires that verification be a deterministic process. We allow for probabilistic verification algorithms in order to give as general a theorem statement as possible; in practice, however, deterministic verification algorithms are almost always used. Strong security itself does imply a sort of verification "consistency" already: if it were possible for an adversary to produce a message and signature for which ver returned both 0 and 1 with non-negligible probabilities, then either Σ is not complete or an adversary is able to violate Σ's strict unforgeability.

Proof. "Only if": Given a Σ that is not strong, we will construct an environment Z that can distinguish whether it is interacting with A and πΣ or with S and Fsig.

1. Assume Σ is not complete, i.e. there exists m ∈ Mk such that Pr[(s, v) ← gen(1^k); σ ← sig(s, m) : 0 ← ver(m, σ, v)] > neg(k) for infinitely many k's. In this case, Z sets sid = (P, 0) and activates some uncorrupted party P with input (KeyGen, sid), followed by (Sign, sid, m). After obtaining v and σ, it then activates some other party P′ with (Verify, sid, m, σ, v) and outputs the returned verification value.

When interacting with the ideal process, Z clearly always outputs 1, whereas in the interaction with πΣ, Z outputs 0 with non-negligible probability. Thus, a πΣ that realizes Fsig must use a Σ that exhibits completeness.

2. Assume Σ is not strictly unforgeable; i.e. there exists a PPT forger G that can produce a valid pair (m, σ) previously unknown to it. Z runs an internal copy of G and hands it the public key v obtained from an uncorrupted party P. Whenever G asks its oracle to sign a message m, Z activates P with input (Sign, sid = (P, 0), m) and reports the output to G. When G generates a pair (m, σ), Z proceeds as follows:

(a) If (m, σ) was a previous response to G's oracle queries, output 0 and halt.

(b) Otherwise, activate some other uncorrupted party with input (Verify, sid, m, σ, v) and output the verification result.

Because v belongs to an uncorrupted party, when Z interacts with the ideal process it will always output 0. By the definition of G, when Z interacts with πΣ in the concrete world, it will output 1 with some non-negligible probability. Thus, a πΣ that realizes Fsig must use a Σ that exhibits strict unforgeability.

Therefore, πΣ realizes Fsig only if Σ is strong.
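The environment built from the forger in case 2 can be sketched as follows. The callables keygen, sign_party, and verify_party stand in for activating the protocol parties, and all names here are assumptions of this sketch rather than notation from the text.

```python
# Hedged sketch of the environment Z wrapping a forger (case 2 above).
def environment_Z(forger, keygen, sign_party, verify_party):
    v = keygen()                      # verification key of uncorrupted P
    responses = []                    # record of the oracle's answers

    def oracle(m):                    # Z relays signing requests to P
        tau = sign_party(m)
        responses.append((m, tau))
        return tau

    m, tau = forger(v, oracle)
    if (m, tau) in responses:         # step (a): not a new pair
        return 0
    return verify_party(m, tau, v)    # step (b): 1 iff the forgery verifies
```

Against the ideal process this Z always outputs 0, since Fsig never verifies a fresh pair under an uncorrupted key; against the concrete protocol a successful forger makes it output 1 with non-negligible probability.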

"If": Assume Σ is strong but πΣ does not realize Fsig; i.e. there is a real-life adversary A such that for any ideal-process adversary S there exists an environment Z that can distinguish between the interactions of A and πΣ and those of S and Fsig. Since Z succeeds for any S, it must succeed for the following S:

• S runs a simulated copy of A, denoted AS.

• Any input from Z is forwarded to AS. AS's outputs are copied to S's outputs.

• When S receives a message (KeyGen, sid, P) from Fsig, it checks whether sid is of the form (P, sid′). If it is not, S ignores the request. If it is, S runs (s, v) ← gen(1^k), records s, and returns (VerificationKey, sid, v) along with algorithms S = sig(s, ·) and V = ver(·, ·, ·) to Fsig.

• When AS tries to corrupt some party P, S corrupts P in the ideal process. If P is the signer, then S reveals the signing key s (and potentially the internal state of algorithm sig) as the internal state of P.

Assume Σ is complete (otherwise the theorem is proven). We consider how the environment might distinguish between the ideal and concrete worlds. There are two cases:

1. a party P0 outputs a new valid signature under another honest party P1's verification key without ever having been told the new signature value, or

2. a signature for a message that was never signed by an honest party P verifies under the key associated with P.

(1) This could never happen in the ideal process, since Fsig will never verify a forged signature under an uncorrupted verification key, so it must be that this event happens with non-negligible probability during Z's interaction with the concrete world. P0 outputs the new signature at the behest of either Z or AG; if done on behalf of Z, then this is no different from situation (2), and we thus only consider the case where AG created this signature. If it did, then A, with adaptive access to a signing oracle, is capable of producing a new valid signature under a desired verification key with non-negligible probability. This violates Σ's strict unforgeability, so this must not be the case.

(2) This too could never happen in the ideal process, so it must be that the event happens with non-negligible probability in the concrete world. We construct the following PPT algorithm G:

• G runs a simulated copy of Z and simulates for Z an interaction with S in the ideal process with Fsig (and some encryption functionality). Note that G plays the role of both S and Fsig for the simulated Z.

• Instead of actually simulating S, G runs a simulated copy of A and forwards its inputs/outputs accordingly. We label it AG.

• When S is asked to generate a verification key, instead of running gen, G gives AG and the simulated Fsig the key, v, under which a forgery is desired.

• When the simulated Fsig is asked to sign m by the proper party, G asks its oracle to sign m and then has Fsig return the obtained σ.

• Whenever the simulated Z activates some uncorrupted party with input (Verify, sid, m, σ, v), G checks whether (m, σ) is a new valid signature pair previously unseen. If it is, G outputs the pair and halts; if not, it continues the simulation. If AG asks to corrupt the signer, then G halts with a failure output.

Consider whether AG outputs a new valid (m, σ) pair. If it does not, then by Σ's completeness, the nature of the ideal encryption's realization, and our analysis of situation (1), Z's view of an interaction with A and πΣ is statistically close to its view of an interaction with S and Fsig. Since we know Z distinguishes between these two cases with non-negligible probability, it must be that, with non-negligible probability, AG asks for the verification of a previously unseen signature pair that verifies. In this case, G outputs a violation of Σ's strict unforgeability with non-negligible probability. Since Σ is strong, this must not be the case.

It then follows that the environment is unable to meaningfully distinguish between the ideal and real worlds, and thus πΣ realizes Fsig if Σ is strong.

Note that in the "if" direction we allowed for adaptive adversaries, but the "only if" direction required non-adaptive adversaries. This implies that if Σ is strong then πΣ securely realizes Fsig against adaptive adversaries (but not vice versa).


Chapter 7

Adding Signatures to Universally Composable Symbolic Analysis

Having redefined the ideal functionalities as needed, we reconstruct the UCSA framework, now expanded to include digital signatures.

[Figure 7-1: A graphical representation of the UCSA framework with signatures. The diagram relates the Dolev-Yao model (P, {|·|}K^e, [·]K^v; the DY 2-party mutual authentication protocol checker) to the UC/ideal world (F2ma; p, Fcpke, Fcert) via the Mapping Lemma (§7.3, §7.4), and to the real world (p, S, Σ) via §6, yielding the resulting UCSA proof.]

7.1 Dolev-Yao Algebra

The Dolev-Yao algebra is redefined to include the symbols necessary for signatures.

Definition 21 (The Dolev-Yao Message Algebra) Messages in the Dolev-Yao algebra A are composed of atomic elements of the following types:

• Party identifiers (M): these are denoted by symbols P1, P2, ..., for a finite number of names in the algebra. They are public and are associated with a role, which is either that of Initiator or Responder.

• Nonces (R): these can be thought of as a finite number of private, unpredictable random strings. These symbols are denoted R1, R2, and so on.

• Public keys (KPub): these are denoted by symbols K^e_P1, K^e_P2, ..., which are public and each associated with a particular party identifier.

• Verification keys (KVer): these are denoted by symbols K^v_P1, K^v_P2, ..., which are public and each associated with a particular party identifier.

• A garbage term, written G, to represent ill-formed messages,

• ⊥, to represent an error or failure,

• Starting, to indicate that a protocol execution has begun, and

• Finished, to indicate that a protocol execution has ended.

Messages in the algebra can be compounded by the following symbolic operations:

• pair : A × A → A. When messages m and m′ are paired, we write m|m′.

• encrypt : KPub × A → A. When message m is encrypted with public key K^e_P, we write {|m|}K^e_P.

• sign : KVer × A → A. When message m is signed for verification key K^v_P, we write [m]K^v_P.

Definition 22 (Closure) Let

• RAdv ⊂ R be the set of nonces associated with the adversary,

• K^e_Adv = {K^e_P : P ∈ MAdv} be the set of encryption keys belonging to corrupted parties (K^e_Adv ⊂ KPub), and

• K^v_Adv = {K^v_P : P ∈ MAdv} be the set of verification keys belonging to corrupted parties (K^v_Adv ⊂ KVer).

Then the closure of a set S ⊆ A, written C[S], is the smallest subset of A such that:

1. S ⊆ C[S],

2. M ∪ KVer ∪ KPub ∪ RAdv ⊆ C[S],

3. If {|m|}K ∈ C[S] and K ∈ K^e_Adv, then m ∈ C[S],

4. If m ∈ C[S] and K ∈ KPub, then {|m|}K ∈ C[S],

5. If [m]K ∈ C[S], then m ∈ C[S],¹

6. If m ∈ C[S] and K ∈ K^v_Adv, then [m]K ∈ C[S],

7. If m|m′ ∈ C[S], then m ∈ C[S] and m′ ∈ C[S], and

8. If m ∈ C[S] and m′ ∈ C[S], then m|m′ ∈ C[S].
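The closure rules split naturally into destruction rules (3, 5, and 7), which can be applied to a fixpoint, and construction rules (4, 6, and 8), which are better checked on demand since they generate infinitely many terms. The sketch below assumes an ad-hoc tuple encoding of symbolic terms; the function names and encodings are illustrative, not part of the definition.

```python
# Symbolic terms as nested tuples: ("enc", K, m), ("sig", K, m),
# ("pair", a, b); atoms are plain strings.

def closure_base(S, public_atoms, k_adv_enc):
    """Apply destruction rules 3, 5 and 7 to a fixpoint."""
    C = set(S) | set(public_atoms)            # rules 1 and 2
    changed = True
    while changed:
        changed = False
        for t in list(C):
            if not isinstance(t, tuple):
                continue
            derived = set()
            if t[0] == "enc" and t[1] in k_adv_enc:
                derived.add(t[2])             # rule 3: decrypt adversary keys
            elif t[0] == "sig":
                derived.add(t[2])             # rule 5: signatures reveal m
            elif t[0] == "pair":
                derived.update({t[1], t[2]})  # rule 7: split pairs
            if not derived <= C:
                C |= derived
                changed = True
    return C

def derivable(m, C, k_pub, k_adv_ver):
    """Check construction rules 4, 6 and 8 on top of the base closure."""
    if m in C:
        return True
    if isinstance(m, tuple):
        if m[0] == "pair":                    # rule 8
            return (derivable(m[1], C, k_pub, k_adv_ver)
                    and derivable(m[2], C, k_pub, k_adv_ver))
        if m[0] == "enc" and m[1] in k_pub:   # rule 4
            return derivable(m[2], C, k_pub, k_adv_ver)
        if m[0] == "sig" and m[1] in k_adv_ver:  # rule 6
            return derivable(m[2], C, k_pub, k_adv_ver)
    return False
```

Rule 5 is applied unconditionally here, matching the footnote: only message-revealing signature schemes are considered.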

The algebra remains free under these changes. Figure 7-2 shows an updated example symbolic message parse tree.

Definition 23 (Dolev-Yao Trace) We inductively define a Dolev-Yao trace t for protocol P as a description of events that occur during the execution of P:

t = H0 H1 H2 ... Hn−2 Hn−1 Hn

where event Hi is either

• of the form ["input", P, oi, P′, S], which indicates the initial input of participant P to take the role oi and interact with participant P′, assuming initial internal state S.

¹As explained in Section 2.2.2, we only consider signature schemes that are message revealing or, alternatively, only valid when transmitted with their message.

[Figure 7-2: An example symbolic message parse tree for a message built from R1, {|P1|}K^e_P1, R2, and K^v_P0.]
