

A → B : m
A ← B : encA {m, sigB {m}}


Figure 4-5: A mixed protocol


Again, a passive adversary should only learn m and encA {m, sigB {m}}, but in an
environment with ideal functionalities (Figure 4-6), a passive adversary learns the value
for m, sigB {m} = σ, and encA {m, sigB {m}} = c.

As with the public key encryption ideal functionality, this will require the use of a
randomized function for generating signature values.

A → B : R
B → Fcert : (sign, m)
Fcert → Adv : (sign, B)
Fcert ← Adv : (signature, σ)
B ← Fcert : (signature, σ)
B → Fcpke : (encrypt, A, (m, σ))
Fcpke → Adv : (encrypt, A)
Fcpke ← Adv : (ciphertext, c)
B ← Fcpke : (ciphertext, c)
A ← B : c


Figure 4-6: The mixed protocol, expanded


4.3.3 Signature Looseness

Many common definitions of signature security allow for public modification of signatures.
Intuitively, a signature scheme loses little by allowing an adversary to construct a valid
message/signature pair (m, σ) if it already knows of a different valid signature σ′ on
message m. In fact, it seems almost worrisome that a scheme that protects against such
adversarial action might be prohibitively restrictive, perhaps forcing signers to only sign
a message once.

It becomes difficult, however, for ideal functionalities to allow entities to create alternate
signatures if it is unclear to the functionality at the time of verification that the authoring
entity had seen a valid signature at the time of creation. In Fsig , if a message has been
validly signed in the past, then the adversary is allowed to choose the validity of any
proposed alternate signatures. This presents a problem when the ideal signature
functionality is combined with ideal encryption functionality.

Consider a "vouching" protocol. Party A is taking messages, but only from parties who
are currently authorized to do so by some third party C, with whom A does not have direct
communication. Parties A, B, and C might engage in a protocol like that of Figure 4-7 to
allow B to send the message to A.

A → B : sigA {R}
B → C : sigA {R}
B ← C : encB {sigC {R}}
A ← B : encA {m, sigC {R}}


Figure 4-7: A vouching protocol

When A verifies the final message of the protocol, the adversary has not yet seen C's
signature of R. This does not, however, stop the adversary from stepping in at the final
stage of the protocol with a forgery of C's signature:


A ← Adv : encA {m′, sigC {R}′}

Because a valid signature for R exists under C's key, when A goes to Fcert to verify this
forgery, the adversary can choose to have Fcert accept the signature, meaning the adversary
has produced C's signature on R without knowing it beforehand.
This problem will require us to examine the looseness of the ideal signature formulation
(Section 5.2) and schemes that realize it (Section 6.2).
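The loose verification rule described above can be sketched in a few lines (an illustrative model with hypothetical names, not pseudocode from this thesis): once some valid signature on m has been recorded, the functionality defers the validity of any other proposed signature on m to the adversary, which is exactly what lets the forged final message pass.

```python
def loose_verify(records, adv_choice, m, sigma):
    """Illustrative model of the loose verification rule: records holds valid
    (message, signature) pairs; adv_choice is the adversary's decision."""
    if (m, sigma) in records:                 # a signature we created: accept
        return True
    if any(msg == m for msg, _ in records):   # m was validly signed before:
        return adv_choice(m, sigma)           # the adversary picks validity
    return False                              # forgery on an unsigned m: reject

records = {("R", "sigma_C")}                  # C validly signed R earlier
always_accept = lambda m, s: True             # the adversary opts to accept
assert loose_verify(records, always_accept, "R", "forged") is True   # the attack
assert loose_verify(records, always_accept, "X", "forged") is False  # fresh m is safe
```

The second branch is the problem: the functionality cannot tell whether the verifying forgery came from someone who had actually seen a valid signature.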




Chapter 5


Redefining Ideal Functionalities

We redefine the formalizations of ideal functionalities Fcpke , Fsig , Fcert based on the
observations of Chapter 4. These new formalizations make use of "statistically
unpredictable" algorithms, which are algorithms that, intuitively, "look random," even to
an adversary that has been able to observe the algorithm's output on adaptively chosen
inputs.
To be more precise, any adversary, even a computationally unbounded one with adaptive
oracle access to G, is unable to guess, with non-negligible probability, G's next output for
any input. We formally define statistically unpredictable algorithms as follows:


Definition 18 (Statistically Unpredictable) PPT algorithm G is statistically
unpredictable with respect to k ("statistically unpredictable" for short) if for any
computationally unbounded adversary A interacting with an instance of G with no initial
state,

Pr[ (x, y) ← A^{G(·, 1^k)}(1^k); y′ ← G(x, 1^k) : y = y′ ] < neg(k)

Note that the instance of G that our adversary has oracle access to is the same instance
that produces output y′ (after our adversary has announced its guess y).
A statistically unpredictable algorithm may maintain state as long as it remains
unpredictable with each call. We do, however, require that the algorithm be locally
executable, i.e. G(x) is executable on a PPT interactive Turing machine requiring no
additional inputs other than x. While the adversary is allowed to interact with the same
instance of G all it wants, it does not have access to G's randomness.
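The game in Definition 18 can be sketched as code (a toy illustration with hypothetical names, not from this thesis): the adversary gets adaptive access to a single instance of G, commits to a guess (x, y), and then that same instance is run once more on x.

```python
import os

class RandomizedG:
    """A toy statistically unpredictable algorithm: it ignores its input and
    returns fresh randomness, so even unbounded adversaries cannot predict it."""
    def __init__(self, k: int):
        self.k = k                     # security parameter (output length in bytes)

    def __call__(self, x: bytes) -> bytes:
        return os.urandom(self.k)      # fresh, locally generated randomness

def unpredictability_game(G, adversary, k: int) -> bool:
    """True iff the adversary wins: after adaptive oracle access to this same
    instance of G, its committed guess y matches G's next output on x."""
    x, y = adversary(G, k)             # adversary queries G, then commits to (x, y)
    y_prime = G(x)                     # the *same* instance is then run on x
    return y == y_prime

def replay_adversary(G, k):
    # Query the oracle, then guess that the next output simply repeats.
    return b"probe", G(b"probe")

# With 16-byte outputs this adversary wins only with probability 2^-128:
assert unpredictability_game(RandomizedG(16), replay_adversary, 16) is False
```

Note that the adversary keeps its oracle handle on the same instance throughout, matching the requirement that G may be stateful but must stay unpredictable call by call.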

Statistical unpredictability differs from pseudo-randomness because the algorithm is
randomized and is even unpredictable on inputs selected by the adversary. This definition is
also different from semantic security (where an adversary cannot tell the difference between
G(m) and G(m′) for m and m′ of its choosing) since we do not care if m is recoverable from
G(m), merely that G(m) is hard to predict for any m.
Fpke /Fcpke and Fsig /Fcert maintain lists of acceptable encryption (E) or signing (S)
algorithms, respectively, that are known to be poly-time and statistically unpredictable. In
the initial steps, when the adversary provides the algorithm descriptions, the adversary is
only allowed to choose algorithms from this list. This is analogous to an adversary picking
from the suite of encryption/signature schemes that are known to have the aforementioned
properties.
Note that we do not check, when creating ciphertexts/signatures, whether the generated
value has either been created before or used by the adversary. By the statistical
unpredictability of the respective algorithms, the probability of such a collision is negligible.
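As a rough sanity check on that claim (a toy calculation, not an argument from this thesis): if each new output looks uniform over 2^k values, a union bound over pairs keeps the collision probability negligible even across very many recorded values.

```python
import os

def collision_probability_bound(q: int, k_bits: int) -> float:
    """Union (birthday) bound: q outputs, each looking uniform over 2^k_bits
    values, collide with probability at most q*(q-1)/2 divided by 2^k_bits."""
    return q * (q - 1) / 2 / float(2 ** k_bits)

# Even 2^40 recorded ciphertexts over 128-bit outputs collide only negligibly:
assert collision_probability_bound(2 ** 40, 128) < 2 ** -48

# Empirically, fresh 16-byte random values essentially never repeat:
outs = {os.urandom(16) for _ in range(10000)}
assert len(outs) == 10000
```

Statistically unpredictable outputs need not be uniform, but the same style of pairwise union bound applies: each new output matches any fixed earlier value with only negligible probability.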



5.1 Public Key Encryption

In order to address the weakness in Fpke described in Section 4.2, we redefine the ideal
functionality (Figure 5-1) using the notion of statistical unpredictability. Users of public
key encryption want a functionality which allows them to use another party's public key
to encrypt a message such that the other party can decipher the message, but anyone else
who sees the ciphertext learns nothing about the plaintext.
Informally, we want to simulate the functionality needed by users of public key
encryption while giving the adversary as much power as possible. When a party needs
to generate a public key, we let the adversary pick the value for the key. When we need
to encrypt a message m, we run a statistically unpredictable algorithm, E, selected by the
adversary on 0^|m|. We store the resulting ciphertext c along with plaintext m in our
memory and hand back c.¹ If the encrypting party uses the wrong encryption key, then we
give m to the adversary and return a ciphertext c of the adversary's choosing (as long as
c isn't a repeat). When the owner of the public key wants to decrypt a ciphertext, we check
if we created the ciphertext: if so, we return the plaintext we remember for that
ciphertext; if not, we run a decryption algorithm D that the adversary gave us when it
created the public key.

¹ Because E is only given 0^|m|, it is impossible to guess anything about m other than its
length. Because E is statistically unpredictable, the adversary is unable to guess the value
of c.

Functionality Fpke

Fpke proceeds as follows over message domain {0, 1}*. The SID is assumed to consist of a
pair SID = (PIDowner , SID′), where PIDowner is the identity of a special party, called the
owner of this instance.

Key Generation: Upon receiving a value (KeyGen, sid) from some party S, verify that
sid = (S, sid′) for some sid′. If not, then ignore the request. Else, do the following:
1. Hand (KeyGen, sid) to the adversary.
2. Receive a public key value e from the adversary and hand it to S. If the ad-
versary has not already done so, it also provides descriptions of statistically
unpredictable polytime algorithm E and deterministic polytime algorithm D.
3. Record the value e.
Encryption: Upon receiving a value (Encrypt, SID, e′, m) from a party P, proceed as follows:
1. If m ∉ Dk, return an error message to P.
2. If e′ = e, set c = E(0^|m|, 1^k), record the pair (c, m), and return c to P.
3. If e′ ≠ e, hand (Encrypt, sid, e′, m) to the adversary and set c to its response. If
c already appears in a previously recorded pair, then return an error message to
P; otherwise return c to P.
Decryption: Upon receiving a value (Decrypt, SID, c) from the owner of this instance,
proceed as follows. (If the input is received from another party, then ignore.)
1. If there is a recorded pair (c, m), then hand m to P.
2. Otherwise, compute m = D(c), and hand m to P.

Figure 5-1: The public-key encryption functionality, Fpke
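The bookkeeping in Figure 5-1 can be sketched as follows (an illustrative Python model with stand-in algorithms E and D; the wrong-key branch, in which the adversary supplies c, is elided here):

```python
import os

class IdealFpke:
    """Illustrative model of Fpke's bookkeeping (Figure 5-1). E and D stand in
    for the adversary-supplied algorithms; this E ignores plaintext content."""
    def __init__(self, E, D, e):
        self.E, self.D, self.e = E, D, e  # adversary-chosen algorithms and key
        self.table = {}                   # recorded (ciphertext -> plaintext) pairs

    def encrypt(self, e_prime: bytes, m: bytes) -> bytes:
        if e_prime != self.e:
            # In Figure 5-1 the adversary is handed (Encrypt, sid, e', m) and
            # chooses c itself; that branch is elided in this sketch.
            raise NotImplementedError("wrong-key branch elided")
        c = self.E(b"\x00" * len(m))      # run E on 0^|m|: only |m| can leak
        self.table[c] = m                 # record the pair (c, m)
        return c

    def decrypt(self, c: bytes) -> bytes:
        if c in self.table:               # ciphertexts we created: table lookup
            return self.table[c]
        return self.D(c)                  # foreign ciphertexts: fall back to D

# Stand-ins: E is statistically unpredictable (fresh randomness), D is trivial.
E = lambda z: os.urandom(len(z) + 16)
D = lambda c: b""
f = IdealFpke(E, D, e=b"pk")
c = f.encrypt(b"pk", b"attack at dawn")
assert f.decrypt(c) == b"attack at dawn"  # correctness comes from the table, not E
```

Decryption never inverts E; correctness is pure bookkeeping, which is why E can safely see only 0^|m|.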



Combining Fpke with ideal key registration functionality Freg , we can create ideal
certified public key encryption functionality Fcpke [16], as described in Figure 5-2. In
Fcpke we don't have to worry about a party using the wrong key to encrypt a message.



5.2 Signatures and Certification with Encryption

A user of a signature scheme desires the ability to output a verification key such that only
the user can produce the signature for a message under the verification key, meaning any
message accompanied with a valid signature must have been signed by the user.
As discussed in Section 4.3.3, loose signatures present a problem when formulating an
ideal signature functionality Fsig in a model containing encryption functionality. In order to

Functionality Fcpke

Fcpke proceeds as follows over message domain {0, 1}*. The SID is assumed to consist of
a pair SID = (PIDowner , SID′), where PIDowner is the identity of a special party, called the
owner of this instance.

Initialization: Expect the adversary to provide the descriptions of statistically
unpredictable polytime algorithm E and deterministic polytime algorithm D.
Encryption: Upon receiving a value (Encrypt, SID, m) from a party P, proceed as follows:
1. Set c = E(0^|m|).
2. Record the pair (c, m), and hand c to P.
Decryption: Upon receiving a value (Decrypt, SID, c) from the owner of this instance,
proceed as follows. (If the input is received from another party, then ignore.)
1. If there is a recorded pair (c, m), then hand m to P.
2. Otherwise, compute m = D(c), and hand m to P.

Figure 5-2: The certified public-key encryption functionality, Fcpke



achieve such a formulation, we must redefine Fsig to no longer allow loose signatures
(Figure 5-3).
The description of the ideal signature functionality is very similar to ideal public key
encryption but with some important differences. When asked to sign m, we give m to S since
we don't care if signature σ reveals m; we only care that σ is hard for the adversary to
guess. For each message/signature pair we see, we remember the verification key it was
associated with and whether or not the signature was valid (the last bit b in the four-tuple).
We always honor signatures we created and act consistently on any message/signature pair
we've seen before. If someone tries to verify a forged message/signature pair under the
correct verification key, we reject it; if they're verifying with the wrong key, then we allow
the adversarial algorithm V to choose the signature's validity.
The ideal signing functionality Fsig can be combined with an ideal certification authority
(Fca , as described in [15]) to create an ideal certification functionality Fcert , described in
Figure 5-4. This allows us to remove the case where a party tries to verify with the wrong
key.




Functionality Fsig

Key Generation: Upon receiving a value (KeyGen, sid) from some party S, verify that
sid = (S, sid′) for some sid′. If not, then ignore the request. Else, hand (KeyGen, sid)
to the adversary. Upon receiving (VerificationKey, sid, v, S, V) from the adversary,
output (VerificationKey, sid, v) to S, and record (S, v, S, V). S and V are the
descriptions of a statistically unpredictable polytime algorithm and a polytime
algorithm, respectively. v is a verification key.
Signature Generation: Upon receiving a value (Sign, sid, m) from S,
1. Verify that sid = (S, sid′) for some sid′. If not, then ignore the request.
2. Set σ = S(m).
3. Output (Signature, sid, m, σ) to S and record the entry (m, σ, 1).
Signature Verification: Upon receiving a value (Verify, SID, m, σ, v′) from some party
P, proceed as follows.
1. If there is an entry (m, σ, b′) recorded, then set b = b′.
2. Else, if v′ = v and the signer is not corrupted, then set b = 0 and record the
entry (m, σ, 0).
3. Else, set b = V(m, σ, v′).
Output (Verified, sid, m, b) to P.

Figure 5-3: The signature functionality, Fsig
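The verification case analysis of Figure 5-3 can be sketched as follows (an illustrative model with hypothetical names; entries stands in for the functionality's recorded tuples). Note how, unlike the loose formulation of Section 4.3.3, a fresh signature under the correct key of an uncorrupted signer is always rejected, even if m was validly signed before:

```python
def fsig_verify(entries, v, signer_corrupted, V, m, sigma, v_prime):
    """Illustrative model of the verification step of Figure 5-3.
    entries maps (m, sigma) -> recorded bit b'; V is adversary-supplied."""
    if (m, sigma) in entries:                  # seen before: act consistently
        return entries[(m, sigma)]
    if v_prime == v and not signer_corrupted:  # right key, honest signer:
        entries[(m, sigma)] = 0                # reject, and remember the rejection
        return 0
    return V(m, sigma, v_prime)                # wrong key: adversary's V decides

entries = {("m", "s"): 1}                      # a signature the functionality made
assert fsig_verify(entries, "v", False, lambda *a: 1, "m", "s", "v") == 1
# A forged sigma' on an already-signed m is now rejected under the correct key:
assert fsig_verify(entries, "v", False, lambda *a: 1, "m", "forged", "v") == 0
```

Recording the rejection in the second branch is what keeps later verifications of the same pair consistent.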




Functionality Fcert

Initialization: Expect the adversary to provide the descriptions of statistically
unpredictable polytime algorithm S and polytime algorithm V.
Signature Generation: Upon receiving a value (Sign, sid, m) from S,
1. Verify that sid = (S, sid′) for some sid′. If not, then ignore the request.
2. Set σ = S(m).
3. Output (Signature, sid, m, σ) to S and record the entry (m, σ, 1).
Signature Verification: Upon receiving a value (Verify, SID, m, σ) from some party P,
proceed as follows.
1. If there is an entry (m, σ, b′) recorded, then set b = b′.
2. Else, if the signer is not corrupted, set b = 0 and record the entry (m, σ, 0).
3. Else, set b = V(m, σ) and record the entry (m, σ, b).
Output (Verified, sid, m, b) to P.

Figure 5-4: The certification functionality, Fcert




Chapter 6


Realizing Ideal Functionalities

In papers concerning ideal functionalities, the relation between a concrete scheme and an
ideal functionality is generally described as an equivalence: not only does a particular
security definition suffice to realize the ideal functionality, but realizing the ideal
functionality also implies that security definition. These papers generally assume, however,
that the concrete scheme will use local stateless algorithms. Under this assumption, we too
will show that our new ideal functionality formulations are in equivalence with existing
security definitions. However, as discussed in Section 4.1, these security definitions are
not sufficient for proofs involving algorithms that do not meet this criterion.



6.1 Public Key Encryption

In our new formulation of Fpke (Figure 5-1), the adversary's powers are strictly weaker than
in the previous formulation (Figure 3-3 from [18]). This implies that any local stateless
encryption schemes will still need to be at least indistinguishable under a 2-stage chosen
ciphertext attack (IND-CCA2) in order to realize Fpke . It remains to be shown, however,
that IND-CCA2 security is strong enough to realize this new Fpke .


Definition 19 Define protocol πS using encryption scheme S as follows:

1. When activated, within some Pi and with input (KeyGen, id), run algorithm gen, output
the encryption key e and record the decryption key d.

2. When activated, within some party Pj and with input (Encrypt, id, e′, m), return enc(e′, m).

3. When activated, within Pi and with input (Decrypt, id, c), return dec(d, c).
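Definition 19 amounts to a thin stateful wrapper around the three algorithms of S. A sketch (illustrative names, with a deliberately insecure toy scheme used only to exercise the interface):

```python
class PiS:
    """Illustrative model of protocol pi_S (Definition 19) wrapping a local,
    stateless scheme S = (gen, enc, dec)."""
    def __init__(self, gen, enc, dec):
        self.gen, self.enc, self.dec = gen, enc, dec
        self.d = None                       # recorded decryption key

    def keygen(self):                       # (KeyGen, id): run gen, record d
        e, self.d = self.gen()
        return e

    def encrypt(self, e_prime, m):          # (Encrypt, id, e', m)
        return self.enc(e_prime, m)

    def decrypt(self, c):                   # (Decrypt, id, c)
        return self.dec(self.d, c)

# A deliberately insecure toy "scheme" (one-byte XOR) just to exercise the API:
gen = lambda: (b"k", b"k")
xor = lambda k, x: bytes(a ^ b for a, b in zip(k * len(x), x))
p = PiS(gen, xor, xor)
e = p.keygen()
assert p.decrypt(p.encrypt(e, b"hi")) == b"hi"
```

The wrapper's only state is the recorded decryption key d, matching the locality and statelessness assumption on the underlying algorithms.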


Theorem 4 Let S = (gen, enc, dec) be an encryption scheme over domain D composed of
local, stateless algorithms. Then πS securely realizes Fpke with respect to domain D and
non-adaptive adversaries if and only if S is IND-CCA2 secure.


Proof. "Only if": Assume that πS securely realizes Fpke but S is not IND-CCA2
secure. This implies that either S is not complete, or there exists an adversary F that can
win the CCA game (Figure 2-1) with a non-negligible advantage.


1. Assume S is not complete, i.e. there exists m ∈ Mk such that Pr[(e, d) ← gen(1^k); c ←
enc(e, m); m′ ← dec(d, c) : m ≠ m′] > neg(k) for infinitely many k's. In this case,
Z sets sid = (P, 0) and activates some uncorrupted party P with input (KeyGen, sid)
to obtain encryption key e. It then activates some other (uncorrupted) party with
(Encrypt, sid, m) to produce c. Z reactivates P with (Decrypt, sid, c) to obtain m′ and
outputs whether m = m′.

When interacting with the ideal process, Z clearly always outputs 1, whereas in the
interaction with πS , Z will output 0 with non-negligible probability. Thus, a πS that
realizes Fpke must use an S that exhibits completeness.

2. Assume there exists an adversary F for S who wins the CCA game with a non-
negligible advantage. Using F, we will construct an environment Z that can
distinguish between interactions in the ideal process and in real life.

When interacting with a network with two uncorrupted parties P1 and P2 , Z simulates
a copy of F and does the following:


(a) Z activates P1 with input (KeyGen, sid) for some random sid. It takes the public
key e that is output by P1 and hands it to F .

(b) When F makes a decryption query c, Z activates P1 with input (Decrypt, sid, c)
and hands the resulting plaintext m back to F .

(c) When F outputs its two plaintext selections m0 and m1 , Z chooses a bit b at
random and activates P1 with input (Encrypt, sid, e, mb ). It then hands the
resulting challenge ciphertext c∗ to F.

(d) When F makes a subsequent decryption query c, Z checks if c = c∗. If it does,
Z outputs a random bit and halts. As long as c ≠ c∗, Z activates P1 with input
(Decrypt, sid, c) and hands the resulting plaintext m back to F.

(e) When F outputs a guess b′, Z outputs b ⊕ b′ and halts.


When Z is operating in the real-life model with adversary A and πS , its simulated F
sees an IND-CCA2 game using encryption scheme S. This implies that F will output
b′ = b with probability 1/2 + ε, where ε is a non-negligible advantage, causing Z to
output 0 with probability ≥ 1/2 + ε.

When Z is operating in the ideal process with any ideal adversary S and Fpke , then
the challenge ciphertext returned to F is generated independently of b. The ciphertext
c∗ is actually the output of E(0^|mb|) where |m0| = |m1|. This means F is playing a
guessing game where no advantage is possible and will guess b′ = b with probability
exactly 1/2.

Thus Z outputs 0 with non-negligibly higher probability when interacting with the
real world, violating our initial assumption. Thus there must not exist an adversary
F who can regularly win the CCA game under S.


This implies that if πS securely realizes Fpke , then S is IND-CCA2 secure.
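The reduction in step 2 can be sketched as follows (hypothetical callback shapes; keygen, encrypt, and decrypt stand in for activating P1 as in steps (a) through (e)):

```python
import random

def cca_environment(keygen, encrypt, decrypt, F):
    """Illustrative model of environment Z from step 2. keygen/encrypt/decrypt
    stand in for activating party P1; F is the simulated CCA adversary."""
    e = keygen()                               # (a) obtain the public key
    state = {"challenge": None, "b": 0}

    def dec_oracle(c):                         # (b)/(d) decryption queries
        if c == state["challenge"]:
            return None                        # refuse the challenge ciphertext
        return decrypt(c)

    def challenge(m0, m1):                     # (c) pick b, encrypt m_b
        state["b"] = random.randrange(2)
        state["challenge"] = encrypt(e, (m0, m1)[state["b"]])
        return state["challenge"]

    b_guess = F(e, dec_oracle, challenge)      # (e) run F to its guess b'
    return state["b"] ^ b_guess                # b XOR b': 0 whenever F is right

# Against identity "encryption" the adversary trivially always wins, so Z
# outputs 0 every time, the real-world behavior the proof distinguishes.
F = lambda e, dec, chal: chal(0, 1)
assert cca_environment(lambda: "e", lambda e, m: m, lambda c: c, F) == 0
```

Against the ideal process the challenge would be independent of b, so the same Z would output 0 only half the time, which is the gap the proof exploits.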


"If": Assume S is a local, stateless IND-CCA2 secure encryption scheme but that
πS does not realize Fpke , i.e. there is a real-life adversary A such that for any ideal-process
adversary S there exists an environment Z that can distinguish whether it is interacting
with A and πS or S and Fpke . Since Z succeeds for any S, it must succeed for the following
S:


• S runs a simulated copy of A, denoted AS .


• Any input from Z is forwarded to AS . AS's outputs are copied to S's outputs.
